PR: kevtaylor: Promote VolumeSubpathEnvExpansion feature gate to GA
Result: FAILURE
Tests: 1 failed / 2898 succeeded
Started: 2019-10-10 13:08
Elapsed: 29m15s
Revision:
Builder: gke-prow-ssd-pool-1a225945-t5v8
Refs: master:4fb75e2f, 82578:cb8a7c1a
pod: f22b5ef2-eb5e-11e9-be94-22c1fd76cfb7
infra-commit: 6f3341f98
repo: k8s.io/kubernetes
repo-commit: 72a9e676557ae54055a43f042ba9eea316c1eaa7
repos: {u'k8s.io/kubernetes': u'master:4fb75e2f0d9a36c47edcf65f89bb92f20274ee56,82578:cb8a7c1a4c36d7716fa01e5ef4b8dc304cad4878'}

Test Failures


k8s.io/kubernetes/test/integration/volumescheduling TestVolumeBinding 1m5s

go test -v k8s.io/kubernetes/test/integration/volumescheduling -run TestVolumeBinding$
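For context, the -run argument to go test is a regular expression matched against test names, so TestVolumeBinding$ selects only this test. The snippet below is a hypothetical placeholder showing the shape of a test that the command above would match; it is not the actual Kubernetes integration test body, which (as the log below shows) brings up an apiserver and exercises volume binding.

package volumescheduling

import "testing"

// Matched by the -run regexp TestVolumeBinding$ shown above.
// Placeholder body only; the real integration test wires up an
// apiserver, a scheduler, and PV/PVC binding scenarios.
func TestVolumeBinding(t *testing.T) {
	t.Log("placeholder for the volume binding integration test")
}

The captured output of the failing run follows.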
=== RUN   TestVolumeBinding
W1010 13:33:20.132539  110878 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I1010 13:33:20.132576  110878 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I1010 13:33:20.132594  110878 master.go:305] Node port range unspecified. Defaulting to 30000-32767.
I1010 13:33:20.132606  110878 master.go:261] Using reconciler: 
I1010 13:33:20.135737  110878 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.136063  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.136333  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.145879  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.145973  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.167050  110878 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I1010 13:33:20.167238  110878 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.167760  110878 reflector.go:185] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I1010 13:33:20.168021  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.173847  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.176171  110878 watch_cache.go:451] Replace watchCache (rev: 32396) 
I1010 13:33:20.176511  110878 store.go:1342] Monitoring events count at <storage-prefix>//events
I1010 13:33:20.176615  110878 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.177030  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.177079  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.178017  110878 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1010 13:33:20.179726  110878 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I1010 13:33:20.179864  110878 reflector.go:185] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I1010 13:33:20.180022  110878 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.180963  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.181098  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.181432  110878 watch_cache.go:451] Replace watchCache (rev: 32396) 
I1010 13:33:20.181825  110878 watch_cache.go:451] Replace watchCache (rev: 32396) 
I1010 13:33:20.183578  110878 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I1010 13:33:20.183668  110878 reflector.go:185] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I1010 13:33:20.184014  110878 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.184244  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.184295  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.184887  110878 watch_cache.go:451] Replace watchCache (rev: 32396) 
I1010 13:33:20.185720  110878 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I1010 13:33:20.186077  110878 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.186299  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.186335  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.186479  110878 reflector.go:185] Listing and watching *core.Secret from storage/cacher.go:/secrets
I1010 13:33:20.189073  110878 watch_cache.go:451] Replace watchCache (rev: 32396) 
I1010 13:33:20.190523  110878 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I1010 13:33:20.190785  110878 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.190936  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.190964  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.191053  110878 reflector.go:185] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I1010 13:33:20.192863  110878 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I1010 13:33:20.192984  110878 reflector.go:185] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I1010 13:33:20.193198  110878 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.193449  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.193495  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.194205  110878 watch_cache.go:451] Replace watchCache (rev: 32396) 
I1010 13:33:20.194478  110878 watch_cache.go:451] Replace watchCache (rev: 32396) 
I1010 13:33:20.195518  110878 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I1010 13:33:20.195846  110878 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.195855  110878 reflector.go:185] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I1010 13:33:20.196884  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.196975  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.199515  110878 watch_cache.go:451] Replace watchCache (rev: 32396) 
I1010 13:33:20.200954  110878 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I1010 13:33:20.201245  110878 reflector.go:185] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I1010 13:33:20.201474  110878 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.202501  110878 watch_cache.go:451] Replace watchCache (rev: 32396) 
I1010 13:33:20.203013  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.203272  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.204427  110878 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I1010 13:33:20.204820  110878 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.204984  110878 reflector.go:185] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I1010 13:33:20.205084  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.205123  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.207835  110878 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I1010 13:33:20.208163  110878 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.208379  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.208423  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.208453  110878 watch_cache.go:451] Replace watchCache (rev: 32396) 
I1010 13:33:20.208561  110878 reflector.go:185] Listing and watching *core.Node from storage/cacher.go:/minions
I1010 13:33:20.211682  110878 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I1010 13:33:20.211988  110878 reflector.go:185] Listing and watching *core.Pod from storage/cacher.go:/pods
I1010 13:33:20.212189  110878 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.212496  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.212528  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.212542  110878 watch_cache.go:451] Replace watchCache (rev: 32396) 
I1010 13:33:20.213921  110878 watch_cache.go:451] Replace watchCache (rev: 32397) 
I1010 13:33:20.215041  110878 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I1010 13:33:20.215185  110878 reflector.go:185] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I1010 13:33:20.215876  110878 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.216174  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.216362  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.217330  110878 watch_cache.go:451] Replace watchCache (rev: 32397) 
I1010 13:33:20.219414  110878 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I1010 13:33:20.219815  110878 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.221007  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.219715  110878 reflector.go:185] Listing and watching *core.Service from storage/cacher.go:/services/specs
I1010 13:33:20.221162  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.222498  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.222727  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.224437  110878 watch_cache.go:451] Replace watchCache (rev: 32398) 
I1010 13:33:20.225012  110878 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.225299  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.225313  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.232231  110878 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I1010 13:33:20.232270  110878 rest.go:115] the default service ipfamily for this cluster is: IPv4
I1010 13:33:20.232794  110878 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.233395  110878 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.234234  110878 reflector.go:185] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I1010 13:33:20.235726  110878 watch_cache.go:451] Replace watchCache (rev: 32398) 
I1010 13:33:20.235956  110878 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.237400  110878 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.238478  110878 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.239314  110878 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.239928  110878 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.240043  110878 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.240266  110878 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.240613  110878 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.241386  110878 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.241628  110878 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.242651  110878 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.242975  110878 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.243589  110878 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.243792  110878 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.244317  110878 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.244519  110878 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.244633  110878 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.244781  110878 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.244938  110878 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.245138  110878 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.245485  110878 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.246461  110878 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.246893  110878 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.248384  110878 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.249531  110878 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.249958  110878 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.250254  110878 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.251322  110878 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.251591  110878 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.252242  110878 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.252906  110878 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.253405  110878 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.254491  110878 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.254822  110878 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.254953  110878 master.go:453] Skipping disabled API group "auditregistration.k8s.io".
I1010 13:33:20.254975  110878 master.go:464] Enabling API group "authentication.k8s.io".
I1010 13:33:20.254993  110878 master.go:464] Enabling API group "authorization.k8s.io".
I1010 13:33:20.255217  110878 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.255411  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.255435  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.257656  110878 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1010 13:33:20.257979  110878 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.259675  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.259772  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.262508  110878 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1010 13:33:20.265149  110878 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1010 13:33:20.265773  110878 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.265833  110878 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1010 13:33:20.266138  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.266194  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.270534  110878 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1010 13:33:20.270581  110878 master.go:464] Enabling API group "autoscaling".
I1010 13:33:20.270949  110878 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.271385  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.271472  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.271484  110878 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1010 13:33:20.271974  110878 watch_cache.go:451] Replace watchCache (rev: 32399) 
I1010 13:33:20.272446  110878 watch_cache.go:451] Replace watchCache (rev: 32399) 
I1010 13:33:20.273294  110878 watch_cache.go:451] Replace watchCache (rev: 32399) 
I1010 13:33:20.274535  110878 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I1010 13:33:20.274856  110878 reflector.go:185] Listing and watching *batch.Job from storage/cacher.go:/jobs
I1010 13:33:20.275864  110878 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.277361  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.277437  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.278846  110878 watch_cache.go:451] Replace watchCache (rev: 32400) 
I1010 13:33:20.280863  110878 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I1010 13:33:20.281017  110878 master.go:464] Enabling API group "batch".
I1010 13:33:20.280972  110878 reflector.go:185] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I1010 13:33:20.281523  110878 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.282018  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.282158  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.284248  110878 watch_cache.go:451] Replace watchCache (rev: 32401) 
I1010 13:33:20.286464  110878 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I1010 13:33:20.286500  110878 master.go:464] Enabling API group "certificates.k8s.io".
I1010 13:33:20.287531  110878 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.287785  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.287818  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.287936  110878 reflector.go:185] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I1010 13:33:20.289369  110878 watch_cache.go:451] Replace watchCache (rev: 32402) 
I1010 13:33:20.289980  110878 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1010 13:33:20.290239  110878 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1010 13:33:20.290329  110878 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.290720  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.290864  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.291439  110878 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1010 13:33:20.291460  110878 master.go:464] Enabling API group "coordination.k8s.io".
I1010 13:33:20.291474  110878 master.go:453] Skipping disabled API group "discovery.k8s.io".
I1010 13:33:20.291503  110878 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1010 13:33:20.291588  110878 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.291718  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.291733  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.292263  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.292416  110878 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1010 13:33:20.292436  110878 master.go:464] Enabling API group "extensions".
I1010 13:33:20.292557  110878 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.292714  110878 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1010 13:33:20.293255  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.293364  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.293385  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.293565  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.295324  110878 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I1010 13:33:20.295403  110878 reflector.go:185] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I1010 13:33:20.295811  110878 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.296091  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.296188  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.297010  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.297651  110878 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1010 13:33:20.297792  110878 master.go:464] Enabling API group "networking.k8s.io".
I1010 13:33:20.297899  110878 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1010 13:33:20.297958  110878 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.298232  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.298326  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.298605  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.299960  110878 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I1010 13:33:20.300093  110878 master.go:464] Enabling API group "node.k8s.io".
I1010 13:33:20.300024  110878 reflector.go:185] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I1010 13:33:20.300512  110878 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.301364  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.301587  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.301706  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.302981  110878 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I1010 13:33:20.303033  110878 reflector.go:185] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I1010 13:33:20.303204  110878 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.303358  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.303382  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.304296  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.305214  110878 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I1010 13:33:20.305239  110878 master.go:464] Enabling API group "policy".
I1010 13:33:20.305315  110878 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.305398  110878 reflector.go:185] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I1010 13:33:20.305430  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.305561  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.306563  110878 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1010 13:33:20.306883  110878 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.307105  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.307188  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.306911  110878 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1010 13:33:20.308387  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.308466  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.309283  110878 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1010 13:33:20.309519  110878 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.309688  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.309716  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.309325  110878 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1010 13:33:20.312200  110878 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1010 13:33:20.312298  110878 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1010 13:33:20.313104  110878 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.313372  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.313401  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.314487  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.314616  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.315193  110878 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1010 13:33:20.315329  110878 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.315679  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.315836  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.315863  110878 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1010 13:33:20.317341  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.318779  110878 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1010 13:33:20.319345  110878 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.319071  110878 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1010 13:33:20.320795  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.321961  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.321693  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.323404  110878 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1010 13:33:20.323561  110878 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1010 13:33:20.324876  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.325561  110878 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.325980  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.326108  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.328073  110878 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1010 13:33:20.328495  110878 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.329876  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.328228  110878 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1010 13:33:20.330053  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.330904  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.332467  110878 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1010 13:33:20.332724  110878 master.go:464] Enabling API group "rbac.authorization.k8s.io".
I1010 13:33:20.332591  110878 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1010 13:33:20.336144  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.337328  110878 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.337714  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.337849  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.339539  110878 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1010 13:33:20.340716  110878 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.340858  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.340879  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.341083  110878 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1010 13:33:20.342185  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.344001  110878 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1010 13:33:20.344062  110878 master.go:464] Enabling API group "scheduling.k8s.io".
I1010 13:33:20.344056  110878 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1010 13:33:20.344208  110878 master.go:453] Skipping disabled API group "settings.k8s.io".
I1010 13:33:20.345013  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.345683  110878 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.345862  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.345885  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.347047  110878 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1010 13:33:20.347388  110878 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.347495  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.347541  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.347631  110878 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1010 13:33:20.348758  110878 watch_cache.go:451] Replace watchCache (rev: 32403) 
I1010 13:33:20.351158  110878 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1010 13:33:20.351276  110878 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.352567  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.354865  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.351603  110878 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1010 13:33:20.368099  110878 watch_cache.go:451] Replace watchCache (rev: 32404) 
I1010 13:33:20.368346  110878 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1010 13:33:20.368502  110878 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.368584  110878 reflector.go:185] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1010 13:33:20.368817  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.368857  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.371509  110878 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I1010 13:33:20.371954  110878 reflector.go:185] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I1010 13:33:20.372239  110878 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.372509  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.372557  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.373909  110878 watch_cache.go:451] Replace watchCache (rev: 32404) 
I1010 13:33:20.374201  110878 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1010 13:33:20.374312  110878 watch_cache.go:451] Replace watchCache (rev: 32404) 
I1010 13:33:20.374660  110878 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.374882  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.374924  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.375271  110878 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1010 13:33:20.376167  110878 watch_cache.go:451] Replace watchCache (rev: 32404) 
I1010 13:33:20.376723  110878 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1010 13:33:20.376764  110878 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1010 13:33:20.376824  110878 master.go:464] Enabling API group "storage.k8s.io".
I1010 13:33:20.377178  110878 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.377486  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.377512  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.378477  110878 watch_cache.go:451] Replace watchCache (rev: 32404) 
I1010 13:33:20.380579  110878 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I1010 13:33:20.381166  110878 reflector.go:185] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I1010 13:33:20.381306  110878 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.381508  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.381533  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.383157  110878 watch_cache.go:451] Replace watchCache (rev: 32404) 
I1010 13:33:20.384923  110878 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I1010 13:33:20.386112  110878 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.385027  110878 reflector.go:185] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I1010 13:33:20.386576  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.387138  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.388618  110878 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I1010 13:33:20.388976  110878 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.389102  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.389117  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.389222  110878 reflector.go:185] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I1010 13:33:20.390623  110878 watch_cache.go:451] Replace watchCache (rev: 32404) 
I1010 13:33:20.390841  110878 watch_cache.go:451] Replace watchCache (rev: 32404) 
I1010 13:33:20.392060  110878 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I1010 13:33:20.392416  110878 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.392822  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.392869  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.392836  110878 reflector.go:185] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I1010 13:33:20.394249  110878 watch_cache.go:451] Replace watchCache (rev: 32404) 
I1010 13:33:20.394390  110878 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I1010 13:33:20.394476  110878 master.go:464] Enabling API group "apps".
I1010 13:33:20.394503  110878 reflector.go:185] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I1010 13:33:20.394561  110878 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.395273  110878 watch_cache.go:451] Replace watchCache (rev: 32404) 
I1010 13:33:20.395616  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.395651  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.396676  110878 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1010 13:33:20.396903  110878 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1010 13:33:20.396843  110878 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.397360  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.397386  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.398697  110878 watch_cache.go:451] Replace watchCache (rev: 32404) 
I1010 13:33:20.398859  110878 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1010 13:33:20.398703  110878 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1010 13:33:20.399111  110878 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.400265  110878 watch_cache.go:451] Replace watchCache (rev: 32404) 
I1010 13:33:20.400568  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.401077  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.402058  110878 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1010 13:33:20.402118  110878 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.402370  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.402398  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.402525  110878 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1010 13:33:20.408120  110878 watch_cache.go:451] Replace watchCache (rev: 32404) 
I1010 13:33:20.408272  110878 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1010 13:33:20.408314  110878 master.go:464] Enabling API group "admissionregistration.k8s.io".
I1010 13:33:20.408404  110878 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.408842  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:20.408882  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:20.409078  110878 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1010 13:33:20.410213  110878 store.go:1342] Monitoring events count at <storage-prefix>//events
I1010 13:33:20.410242  110878 master.go:464] Enabling API group "events.k8s.io".
I1010 13:33:20.410334  110878 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1010 13:33:20.410615  110878 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.410971  110878 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.411076  110878 watch_cache.go:451] Replace watchCache (rev: 32404) 
I1010 13:33:20.411755  110878 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.412024  110878 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.411001  110878 watch_cache.go:451] Replace watchCache (rev: 32404) 
I1010 13:33:20.412395  110878 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.412581  110878 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.412995  110878 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.413233  110878 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.413545  110878 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.413722  110878 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.415770  110878 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.416160  110878 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.417773  110878 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.418164  110878 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.419622  110878 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.420128  110878 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.421625  110878 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.421983  110878 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.423195  110878 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.423654  110878 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1010 13:33:20.423732  110878 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I1010 13:33:20.424799  110878 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.425061  110878 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.425802  110878 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.426982  110878 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.428277  110878 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.429880  110878 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.430314  110878 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.431579  110878 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.432567  110878 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.433227  110878 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.434257  110878 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1010 13:33:20.434343  110878 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I1010 13:33:20.435515  110878 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.436128  110878 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.438035  110878 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.439177  110878 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.439937  110878 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.441376  110878 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.444348  110878 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.446047  110878 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.446911  110878 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.448223  110878 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.449874  110878 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1010 13:33:20.450022  110878 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I1010 13:33:20.451147  110878 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.452072  110878 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1010 13:33:20.452241  110878 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I1010 13:33:20.453495  110878 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.454380  110878 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.454716  110878 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.455608  110878 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.456633  110878 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.457391  110878 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.458290  110878 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1010 13:33:20.458544  110878 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I1010 13:33:20.460362  110878 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.461568  110878 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.462139  110878 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.463373  110878 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.463839  110878 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.464349  110878 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.465460  110878 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.466168  110878 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.466737  110878 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.468284  110878 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.468800  110878 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.469372  110878 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1010 13:33:20.469457  110878 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1010 13:33:20.469470  110878 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
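The two warnings above come from the generic API server skipping group/versions that have no resources enabled in this test configuration (apps/v1beta2, apps/v1beta1, and earlier scheduling/storage alpha versions). A minimal sketch, assuming a recent client-go and a kubeconfig path that is only a placeholder here, for listing which group/versions a server actually advertises; skipped versions should be absent from the output:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client config; the kubeconfig path is a placeholder for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	// Print every group/version the apiserver advertises.
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion)
		}
	}
}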
I1010 13:33:20.470630  110878 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.471536  110878 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.472903  110878 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.473896  110878 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.475294  110878 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"d82f0006-70be-429b-bed8-090d5fff3021", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 13:33:20.494231  110878 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 13:33:20.494283  110878 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I1010 13:33:20.494295  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:20.494317  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:20.494328  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:20.494337  110878 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:20.494384  110878 httplog.go:90] GET /healthz: (790.156µs) 0 [Go-http-client/1.1 127.0.0.1:44088]
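The [+]/[-] block above is the per-check healthz report the apiserver returns while individual checks (etcd, post-start hooks) are still failing; the same breakdown can be requested explicitly with the verbose query parameter. A minimal sketch of polling it over plain HTTP, with the address as a placeholder since the test apiserver here listens on a local ephemeral port:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Placeholder address; substitute the test apiserver's actual host:port.
	resp, err := http.Get("http://127.0.0.1:8080/healthz?verbose")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// A non-200 status plus [-] lines, as in the log above, means at least
	// one health check has not passed yet.
	fmt.Println(resp.StatusCode)
	fmt.Println(string(body))
}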
I1010 13:33:20.497015  110878 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (4.052156ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44090]
I1010 13:33:20.506034  110878 httplog.go:90] GET /api/v1/services: (4.895989ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44090]
I1010 13:33:20.520807  110878 httplog.go:90] GET /api/v1/services: (2.597906ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44090]
I1010 13:33:20.523421  110878 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 13:33:20.523463  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:20.523476  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:20.523487  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:20.523496  110878 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:20.523525  110878 httplog.go:90] GET /healthz: (209.549µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44088]
I1010 13:33:20.526210  110878 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.55815ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44090]
I1010 13:33:20.528267  110878 httplog.go:90] GET /api/v1/services: (2.067194ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44088]
I1010 13:33:20.528601  110878 httplog.go:90] GET /api/v1/services: (1.682051ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:20.532660  110878 httplog.go:90] POST /api/v1/namespaces: (5.221308ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44090]
I1010 13:33:20.534640  110878 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.404102ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:20.538169  110878 httplog.go:90] POST /api/v1/namespaces: (2.898973ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:20.540164  110878 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.537043ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:20.542497  110878 httplog.go:90] POST /api/v1/namespaces: (1.826314ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
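The 404/201 pairs above are the bootstrap controller creating the system namespaces (kube-system, kube-public, kube-node-lease) on first startup: a GET that misses, then a POST. A minimal get-or-create sketch with client-go, assuming a kubernetes.Interface and context supplied by the caller; the package and function names are placeholders:

package bootstrapexample

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureNamespace mirrors the GET-404 / POST-201 pattern in the log above.
func ensureNamespace(ctx context.Context, clientset kubernetes.Interface, name string) error {
	_, err := clientset.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return nil
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	_, err = clientset.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: name},
	}, metav1.CreateOptions{})
	return err
}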
I1010 13:33:20.596068  110878 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 13:33:20.596112  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:20.596154  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:20.596180  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:20.596189  110878 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:20.596243  110878 httplog.go:90] GET /healthz: (374.301µs) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:20.625032  110878 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 13:33:20.625780  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:20.625936  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:20.626022  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:20.626165  110878 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:20.626547  110878 httplog.go:90] GET /healthz: (1.843142ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:20.696847  110878 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 13:33:20.696897  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:20.696906  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:20.696913  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:20.696920  110878 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:20.697207  110878 httplog.go:90] GET /healthz: (679.449µs) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:20.728674  110878 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 13:33:20.728716  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:20.728728  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:20.728761  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:20.728770  110878 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:20.728900  110878 httplog.go:90] GET /healthz: (497.887µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:20.795572  110878 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 13:33:20.795610  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:20.795623  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:20.795633  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:20.795642  110878 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:20.795673  110878 httplog.go:90] GET /healthz: (270.756µs) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:20.824967  110878 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 13:33:20.825005  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:20.825019  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:20.825029  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:20.825038  110878 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:20.825088  110878 httplog.go:90] GET /healthz: (294.148µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:20.895554  110878 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 13:33:20.895599  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:20.895613  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:20.895623  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:20.895632  110878 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:20.895683  110878 httplog.go:90] GET /healthz: (290.791µs) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:20.924891  110878 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 13:33:20.924935  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:20.924949  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:20.924959  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:20.924968  110878 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:20.925022  110878 httplog.go:90] GET /healthz: (325.807µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:20.995618  110878 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 13:33:20.995658  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:20.995671  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:20.995681  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:20.995693  110878 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:20.995725  110878 httplog.go:90] GET /healthz: (320.361µs) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:21.026795  110878 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 13:33:21.026841  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.026856  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:21.026866  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:21.026875  110878 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:21.026920  110878 httplog.go:90] GET /healthz: (352.782µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.098100  110878 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 13:33:21.098147  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.098162  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:21.098173  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:21.098181  110878 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:21.098217  110878 httplog.go:90] GET /healthz: (305.991µs) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:21.125003  110878 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 13:33:21.125043  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.125056  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:21.125068  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:21.125076  110878 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:21.125136  110878 httplog.go:90] GET /healthz: (320.607µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.132473  110878 client.go:361] parsed scheme: "endpoint"
I1010 13:33:21.132572  110878 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 13:33:21.205398  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.205431  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:21.205450  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:21.205475  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:21.205547  110878 httplog.go:90] GET /healthz: (3.664685ms) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:21.226158  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.226198  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:21.226236  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:21.226245  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:21.226292  110878 httplog.go:90] GET /healthz: (1.513161ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.297218  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.297252  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:21.297281  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:21.297291  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:21.297351  110878 httplog.go:90] GET /healthz: (1.832617ms) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:21.326312  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.326351  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:21.326362  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:21.326372  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:21.326434  110878 httplog.go:90] GET /healthz: (1.745161ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.396824  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.396859  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:21.396872  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:21.396881  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:21.396942  110878 httplog.go:90] GET /healthz: (1.511475ms) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:21.431009  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.431041  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:21.431051  110878 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 13:33:21.431060  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 13:33:21.431117  110878 httplog.go:90] GET /healthz: (3.815908ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.484599  110878 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.193899ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.486616  110878 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.600208ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.487163  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.58782ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44286]
I1010 13:33:21.487514  110878 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (5.393273ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44088]
I1010 13:33:21.508467  110878 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (18.819797ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44088]
I1010 13:33:21.509399  110878 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (21.414363ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.510194  110878 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I1010 13:33:21.510810  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (21.456436ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44286]
I1010 13:33:21.514056  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.514089  110878 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 13:33:21.514100  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:21.514168  110878 httplog.go:90] GET /healthz: (3.908032ms) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:21.516047  110878 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.078681ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44088]
I1010 13:33:21.517345  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.800322ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44286]
I1010 13:33:21.521571  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (3.89664ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44286]
I1010 13:33:21.521268  110878 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (4.753549ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.521839  110878 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I1010 13:33:21.521863  110878 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
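The two PriorityClasses bootstrapped above are the built-in system-node-critical (value 2000001000) and system-cluster-critical (value 2000000000). The log shows them created through the scheduling.k8s.io/v1beta1 endpoint; a minimal sketch of an equivalent create through the v1 client, with clientset and ctx assumed as before and the description text only approximating the real bootstrap object:

package bootstrapexample

import (
	"context"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSystemNodeCritical creates the same PriorityClass the bootstrap hook
// logs above, via the scheduling.k8s.io/v1 client rather than v1beta1.
func createSystemNodeCritical(ctx context.Context, clientset kubernetes.Interface) error {
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "system-node-critical"},
		Value:       2000001000,
		Description: "Used for system critical pods that must not be moved from their current node.",
	}
	_, err := clientset.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{})
	return err
}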
I1010 13:33:21.524005  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.86245ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.526203  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.526336  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:21.526490  110878 httplog.go:90] GET /healthz: (1.856019ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.527286  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.478576ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.528391  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (854.376µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.529702  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (654.59µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.531255  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (951.611µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.532521  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (948.655µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.535862  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.840398ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.536055  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1010 13:33:21.537112  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (915.67µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.539368  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.896683ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.540461  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
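Each GET-404 / POST-201 pair from here on is the RBAC post-start hook reconciling one bootstrap ClusterRole (cluster-admin, system:discovery, and so on); the real rule sets live in plugin/pkg/auth/authorizer/rbac/bootstrappolicy. A minimal sketch of the same kind of create through the rbac/v1 client, where the role name and the single rule are illustrative placeholders rather than actual bootstrap policy:

package bootstrapexample

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createExampleClusterRole creates a ClusterRole the same way the bootstrap
// hook does; the name and rule here are placeholders for illustration.
func createExampleClusterRole(ctx context.Context, clientset kubernetes.Interface) error {
	cr := &rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "example:read-pods"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{""},
			Resources: []string{"pods"},
			Verbs:     []string{"get", "list", "watch"},
		}},
	}
	_, err := clientset.RbacV1().ClusterRoles().Create(ctx, cr, metav1.CreateOptions{})
	return err
}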
I1010 13:33:21.545888  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (3.802115ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.548055  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.795672ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.548398  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1010 13:33:21.549529  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (872.313µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.553315  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.227128ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.553639  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I1010 13:33:21.554852  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (837.421µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.556837  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.657568ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.557042  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I1010 13:33:21.559197  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (818.377µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.561165  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.587445ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.561355  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I1010 13:33:21.562514  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.00159ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.564679  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.337371ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.564841  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I1010 13:33:21.565659  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (679.22µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.567467  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.134966ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.567596  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1010 13:33:21.568418  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (719.408µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.570317  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.417609ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.570541  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1010 13:33:21.571698  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.035509ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.574025  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.971197ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.574490  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1010 13:33:21.575394  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (681.363µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.577206  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.278753ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.577486  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1010 13:33:21.578373  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (668.399µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.580275  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.620453ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.580464  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I1010 13:33:21.581331  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (751.824µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.582651  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.112883ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.582840  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1010 13:33:21.584819  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (797.41µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.585676  110878 cacher.go:785] cacher (*rbac.ClusterRole): 1 objects queued in incoming channel.
I1010 13:33:21.587671  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.931625ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.588539  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1010 13:33:21.589613  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (834.404µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.591665  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.418395ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.592133  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1010 13:33:21.593044  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (753.44µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.594558  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.286677ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.594712  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1010 13:33:21.596376  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.596398  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:21.596428  110878 httplog.go:90] GET /healthz: (1.261032ms) 0 [Go-http-client/1.1 127.0.0.1:44288]
I1010 13:33:21.597915  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (3.059856ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.599476  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.290293ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.599596  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1010 13:33:21.600397  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (710.869µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.602100  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.399763ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.602517  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1010 13:33:21.603490  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (839.573µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.605055  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.3267ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.605252  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1010 13:33:21.606175  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (817.896µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.608233  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.648914ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.608375  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1010 13:33:21.609280  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (753.634µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.611594  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.80759ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.612677  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I1010 13:33:21.615804  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (2.796665ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.617794  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.659449ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.617990  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1010 13:33:21.619093  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (930.479µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.621102  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.580897ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.621596  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1010 13:33:21.622707  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (729.191µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.625969  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.626098  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:21.626280  110878 httplog.go:90] GET /healthz: (1.225401ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.627434  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.681643ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.627947  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
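system:volume-scheduler, created just above, is the bootstrap role the scheduler uses when it binds PersistentVolumes during volume scheduling, which is what this TestVolumeBinding run exercises. A minimal sketch of reading that role back once the bootstrap hook has finished, with clientset and ctx assumed as before; the rules printed come from the server's bootstrap policy, not from this sketch:

package bootstrapexample

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printVolumeSchedulerRole fetches the bootstrap role and prints its rules,
// e.g. to confirm which verbs the scheduler holds on persistentvolumes.
func printVolumeSchedulerRole(ctx context.Context, clientset kubernetes.Interface) error {
	cr, err := clientset.RbacV1().ClusterRoles().Get(ctx, "system:volume-scheduler", metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, rule := range cr.Rules {
		fmt.Printf("groups=%v resources=%v verbs=%v\n", rule.APIGroups, rule.Resources, rule.Verbs)
	}
	return nil
}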
I1010 13:33:21.629345  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.070915ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.632015  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.848404ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.632399  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1010 13:33:21.634663  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.943526ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.643447  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.69079ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.644027  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1010 13:33:21.645517  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.31563ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.647692  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.792233ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.648223  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I1010 13:33:21.649465  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.085334ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.651781  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.781326ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.652233  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1010 13:33:21.653891  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.436769ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.655991  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.618092ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.656286  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1010 13:33:21.657553  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.057511ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.659526  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.594977ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.659949  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1010 13:33:21.661442  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.097683ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.663703  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.869095ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.664149  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1010 13:33:21.665347  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.078142ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.668429  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.626156ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.668811  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1010 13:33:21.670060  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (733.297µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.672163  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.732353ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.672519  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1010 13:33:21.673581  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (958.359µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.676019  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.977596ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.676282  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1010 13:33:21.677342  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (898.116µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.679281  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.452782ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.679661  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1010 13:33:21.680573  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (742.089µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.682169  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.321189ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.682342  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1010 13:33:21.683293  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (819.025µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.685188  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.618281ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.685392  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1010 13:33:21.686493  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (781.491µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.689010  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.94506ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.689453  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1010 13:33:21.690674  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.071571ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.694999  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.65286ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.695200  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1010 13:33:21.696943  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.696967  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:21.696998  110878 httplog.go:90] GET /healthz: (1.302642ms) 0 [Go-http-client/1.1 127.0.0.1:44288]
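(Editor's note: the repeated [+]/[-] blocks in this log are the verbose /healthz report the test apiserver returns while the rbac/bootstrap-roles post-start hook has not yet finished. The sketch below shows, under stated assumptions, how such an endpoint could be polled from Go until it reports 200; the base URL 127.0.0.1:8080 is a hypothetical insecure test address and does not come from this log.)

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz repeatedly GETs /healthz?verbose and prints the per-check
// [+]/[-] lines until the server answers 200 OK or attempts run out.
// The base URL is an assumed insecure test apiserver address.
func pollHealthz(base string, attempts int) error {
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(base + "/healthz?verbose")
		if err != nil {
			return err
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("attempt %d: HTTP %d\n%s\n", i+1, resp.StatusCode, body)
		if resp.StatusCode == http.StatusOK {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz did not become ready after %d attempts", attempts)
}

func main() {
	if err := pollHealthz("http://127.0.0.1:8080", 20); err != nil {
		fmt.Println(err)
	}
}
```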
I1010 13:33:21.697003  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.661097ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.721510  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (23.292742ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.721950  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1010 13:33:21.726383  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.726409  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:21.726447  110878 httplog.go:90] GET /healthz: (1.37766ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.728513  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (5.602431ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.732248  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.649367ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.733921  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1010 13:33:21.735186  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.063363ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.737862  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.325774ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.738124  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1010 13:33:21.739257  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (954.456µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.741059  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.501824ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.741255  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1010 13:33:21.742665  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.22456ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.744807  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.673574ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.745259  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1010 13:33:21.746441  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (911.273µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.749307  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.893624ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.749494  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1010 13:33:21.750549  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (925.878µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.758456  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (7.018475ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.759955  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1010 13:33:21.761511  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.282881ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.764465  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.188506ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.765027  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1010 13:33:21.772698  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (5.864445ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.775559  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.4142ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.775777  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1010 13:33:21.776892  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (971.381µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.779259  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.852287ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.779643  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1010 13:33:21.782179  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.380357ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.794129  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.888926ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.795226  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1010 13:33:21.798675  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.798718  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:21.798793  110878 httplog.go:90] GET /healthz: (3.514565ms) 0 [Go-http-client/1.1 127.0.0.1:44288]
I1010 13:33:21.799138  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (3.673414ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.802709  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.901906ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.803193  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1010 13:33:21.804553  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.169378ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.806376  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.445891ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.806602  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1010 13:33:21.808406  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.365896ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.812934  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.567686ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.813962  110878 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
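(Editor's note: the GET-404 followed by POST-201 pairs above are the RBAC bootstrapper checking whether each default ClusterRole exists and creating it when it does not. Below is a minimal client-go sketch of that ensure-exists pattern; it uses current context-aware client-go signatures, a default kubeconfig path, and a hypothetical role name, and is not the storage_rbac.go code itself.)

```go
package main

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ensureClusterRole mirrors the GET-then-POST pattern in the log:
// a 404 on GET is followed by a Create for the missing ClusterRole.
func ensureClusterRole(ctx context.Context, cs kubernetes.Interface, role *rbacv1.ClusterRole) error {
	_, err := cs.RbacV1().ClusterRoles().Get(ctx, role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present, nothing to do
	}
	if !apierrors.IsNotFound(err) {
		return err // unexpected error, surface it
	}
	_, err = cs.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{})
	return err
}

func main() {
	// Assumes a kubeconfig at the default location; a test harness would
	// normally build the rest.Config directly from the test server.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	role := &rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "example:illustrative-role"}, // hypothetical name
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{""},
			Resources: []string{"pods"},
			Verbs:     []string{"get", "list"},
		}},
	}
	if err := ensureClusterRole(context.TODO(), cs, role); err != nil {
		panic(err)
	}
	fmt.Println("clusterrole ensured")
}
```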
I1010 13:33:21.815705  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.531777ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.826049  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.946655ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:21.826604  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1010 13:33:21.826640  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.826661  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:21.826696  110878 httplog.go:90] GET /healthz: (1.233037ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.850313  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (2.13832ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.864112  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.031504ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.864366  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1010 13:33:21.882585  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.419145ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.896739  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.896819  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:21.896900  110878 httplog.go:90] GET /healthz: (1.445694ms) 0 [Go-http-client/1.1 127.0.0.1:44288]
I1010 13:33:21.902911  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.849307ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.903148  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1010 13:33:21.922498  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.302617ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.925618  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.925824  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:21.926024  110878 httplog.go:90] GET /healthz: (1.451754ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.943469  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.346749ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.943814  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I1010 13:33:21.964625  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (3.430449ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.983929  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.643893ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:21.984208  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1010 13:33:21.996540  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:21.996573  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:21.996624  110878 httplog.go:90] GET /healthz: (1.315825ms) 0 [Go-http-client/1.1 127.0.0.1:44288]
I1010 13:33:22.002603  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.394756ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.025627  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.285584ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.025962  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1010 13:33:22.026286  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.026313  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.026373  110878 httplog.go:90] GET /healthz: (1.231773ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.042975  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.708894ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.064025  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.853469ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.064258  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1010 13:33:22.082729  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.583958ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.096807  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.096847  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.096888  110878 httplog.go:90] GET /healthz: (1.382352ms) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:22.103472  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.444929ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.103780  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1010 13:33:22.122705  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.534052ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.125393  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.125423  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.125455  110878 httplog.go:90] GET /healthz: (902.567µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.144182  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.843689ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.144441  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1010 13:33:22.162926  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.73908ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.185580  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.167799ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.186115  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1010 13:33:22.199449  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.199495  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.199538  110878 httplog.go:90] GET /healthz: (1.6266ms) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:22.202226  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.183541ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.223538  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.329387ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.223852  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1010 13:33:22.225324  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.225352  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.225379  110878 httplog.go:90] GET /healthz: (856.631µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.242937  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.677047ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.264434  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.221395ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.264714  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1010 13:33:22.282570  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.374145ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.297493  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.297534  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.297588  110878 httplog.go:90] GET /healthz: (2.183562ms) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:22.303998  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.865979ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.304312  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1010 13:33:22.327563  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (6.3407ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.329418  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.329447  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.329485  110878 httplog.go:90] GET /healthz: (2.251078ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.343608  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.398399ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.343984  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1010 13:33:22.362605  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.388407ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.383858  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.710283ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.384140  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1010 13:33:22.397220  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.397276  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.397320  110878 httplog.go:90] GET /healthz: (1.963761ms) 0 [Go-http-client/1.1 127.0.0.1:44288]
I1010 13:33:22.402479  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.3199ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.424155  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.031805ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.424446  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1010 13:33:22.425164  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.425187  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.425239  110878 httplog.go:90] GET /healthz: (749.237µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.442839  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.676316ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.464904  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.427171ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.465168  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1010 13:33:22.482473  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.404422ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.496312  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.496350  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.496389  110878 httplog.go:90] GET /healthz: (1.091274ms) 0 [Go-http-client/1.1 127.0.0.1:44288]
I1010 13:33:22.503618  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.53665ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.504189  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1010 13:33:22.522710  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.553588ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.529274  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.529336  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.529393  110878 httplog.go:90] GET /healthz: (3.364383ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.546344  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.082026ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.546738  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1010 13:33:22.563068  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.929496ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.585258  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.960444ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.585728  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1010 13:33:22.597365  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.597408  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.597453  110878 httplog.go:90] GET /healthz: (2.137808ms) 0 [Go-http-client/1.1 127.0.0.1:44288]
I1010 13:33:22.602204  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.117975ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.624457  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.064108ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:22.624688  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1010 13:33:22.625595  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.625628  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.625662  110878 httplog.go:90] GET /healthz: (1.066499ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.642254  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.19539ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.663538  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.407714ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.663861  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1010 13:33:22.684825  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (3.504903ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.697981  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.698263  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.698547  110878 httplog.go:90] GET /healthz: (3.167481ms) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:22.703739  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.673247ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.704135  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1010 13:33:22.723359  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.62301ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.728252  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.728574  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.728818  110878 httplog.go:90] GET /healthz: (3.673803ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.743830  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.708052ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.744454  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1010 13:33:22.762465  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.336921ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.783444  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.189269ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.783918  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1010 13:33:22.797057  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.797084  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.797126  110878 httplog.go:90] GET /healthz: (1.725366ms) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:22.802508  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.449243ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.823718  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.478837ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.824027  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1010 13:33:22.826664  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.826696  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.826785  110878 httplog.go:90] GET /healthz: (1.03138ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.851410  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.454634ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.863557  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.459314ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.863812  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1010 13:33:22.884125  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.942402ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.896348  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.896385  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.896444  110878 httplog.go:90] GET /healthz: (1.008705ms) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:22.910804  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (9.352256ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.911680  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1010 13:33:22.922308  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.234402ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.925333  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.925366  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.925399  110878 httplog.go:90] GET /healthz: (950.92µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.943331  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.215123ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.943550  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1010 13:33:22.962850  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.634008ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.984415  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.251767ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:22.985126  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1010 13:33:22.996198  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:22.996230  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:22.996265  110878 httplog.go:90] GET /healthz: (952.116µs) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:23.002349  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.24628ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.023412  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.3187ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.023846  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1010 13:33:23.026175  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:23.026218  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:23.026261  110878 httplog.go:90] GET /healthz: (1.186379ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.042577  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.390993ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.063158  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.017565ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.063401  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1010 13:33:23.083149  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.720307ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.097269  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:23.097308  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:23.097369  110878 httplog.go:90] GET /healthz: (2.034012ms) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:23.103421  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.283633ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.103641  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1010 13:33:23.122322  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.118761ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.125867  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:23.125902  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:23.125937  110878 httplog.go:90] GET /healthz: (1.45029ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.144346  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.112963ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.144706  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1010 13:33:23.162804  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.510265ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.183256  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.098867ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.183668  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1010 13:33:23.196442  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:23.196466  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:23.196507  110878 httplog.go:90] GET /healthz: (1.202122ms) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:23.202292  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.27433ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.224054  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.791734ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.224360  110878 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1010 13:33:23.226161  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:23.226191  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:23.226248  110878 httplog.go:90] GET /healthz: (1.535459ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.242587  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.338587ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.244550  110878 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.291394ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.264401  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.261512ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.264671  110878 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1010 13:33:23.282760  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.502128ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.285115  110878 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.702554ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.297881  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:23.297916  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:23.297962  110878 httplog.go:90] GET /healthz: (2.298426ms) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:23.303328  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.239387ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.303804  110878 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1010 13:33:23.322619  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.444589ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.324598  110878 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.501765ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.325466  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:23.325507  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:23.325535  110878 httplog.go:90] GET /healthz: (1.006692ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:23.343655  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.557587ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:23.344021  110878 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1010 13:33:23.363380  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (2.126143ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:23.366242  110878 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.335711ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:23.384218  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.936938ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:23.384519  110878 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1010 13:33:23.397257  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:23.397291  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:23.397329  110878 httplog.go:90] GET /healthz: (1.992023ms) 0 [Go-http-client/1.1 127.0.0.1:44288]
I1010 13:33:23.404646  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (3.637813ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:23.407607  110878 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.097512ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:23.423484  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.376276ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:23.423709  110878 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1010 13:33:23.425418  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:23.425633  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:23.425866  110878 httplog.go:90] GET /healthz: (1.388763ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:23.442496  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.363171ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:23.444498  110878 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.517486ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:23.463225  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.039495ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:23.463453  110878 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1010 13:33:23.483006  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.90078ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:23.485229  110878 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.178759ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:23.496697  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:23.496728  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:23.496812  110878 httplog.go:90] GET /healthz: (1.387413ms) 0 [Go-http-client/1.1 127.0.0.1:44288]
I1010 13:33:23.503430  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.386576ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:23.503658  110878 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1010 13:33:23.522679  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.575184ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:23.524716  110878 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.561421ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:23.525551  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:23.525576  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:23.525616  110878 httplog.go:90] GET /healthz: (1.012755ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.543894  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.782796ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.544155  110878 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I1010 13:33:23.562852  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.697879ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.564741  110878 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.153222ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.583834  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.67795ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.584198  110878 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1010 13:33:23.598067  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:23.598106  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:23.598155  110878 httplog.go:90] GET /healthz: (1.127812ms) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:23.602279  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.25024ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.604102  110878 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.420236ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.623656  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.447153ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.623954  110878 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1010 13:33:23.625406  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:23.625436  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:23.625468  110878 httplog.go:90] GET /healthz: (945.223µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.642768  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.505022ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.644711  110878 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.488543ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.663190  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.091064ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.663517  110878 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1010 13:33:23.682466  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.338471ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.685262  110878 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.402418ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.697209  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:23.697235  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:23.697269  110878 httplog.go:90] GET /healthz: (1.228034ms) 0 [Go-http-client/1.1 127.0.0.1:44094]
I1010 13:33:23.703167  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.206139ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.703432  110878 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1010 13:33:23.722574  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.409954ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.724100  110878 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.070441ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.725336  110878 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 13:33:23.725361  110878 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 13:33:23.725390  110878 httplog.go:90] GET /healthz: (882.833µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.743311  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.10464ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.743606  110878 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1010 13:33:23.762914  110878 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.775008ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.765135  110878 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.521314ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.784566  110878 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (3.373007ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.784935  110878 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1010 13:33:23.797038  110878 httplog.go:90] GET /healthz: (1.690671ms) 200 [Go-http-client/1.1 127.0.0.1:44094]
W1010 13:33:23.797948  110878 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 13:33:23.797982  110878 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 13:33:23.797999  110878 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 13:33:23.798014  110878 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 13:33:23.798027  110878 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 13:33:23.798037  110878 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 13:33:23.798052  110878 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 13:33:23.798066  110878 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 13:33:23.798081  110878 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 13:33:23.798100  110878 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 13:33:23.798113  110878 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1010 13:33:23.798182  110878 factory.go:289] Creating scheduler from algorithm provider 'DefaultProvider'
I1010 13:33:23.798196  110878 factory.go:377] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I1010 13:33:23.799417  110878 reflector.go:150] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.799445  110878 reflector.go:185] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.799494  110878 reflector.go:150] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.799512  110878 reflector.go:185] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.799530  110878 reflector.go:150] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.799556  110878 reflector.go:185] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.799847  110878 reflector.go:150] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.799863  110878 reflector.go:185] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.799925  110878 reflector.go:150] Starting reflector *v1beta1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.799937  110878 reflector.go:185] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.800219  110878 reflector.go:150] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.800241  110878 reflector.go:185] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.800322  110878 reflector.go:150] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.800336  110878 reflector.go:185] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.800337  110878 reflector.go:150] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.800351  110878 reflector.go:185] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.800772  110878 reflector.go:150] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.800805  110878 reflector.go:185] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.802087  110878 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (1.027469ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.802451  110878 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (626.807µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44492]
I1010 13:33:23.802684  110878 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (477.862µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44484]
I1010 13:33:23.802738  110878 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (903.941µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44482]
I1010 13:33:23.803000  110878 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (2.015763ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:33:23.803240  110878 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (2.061623ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44480]
I1010 13:33:23.803643  110878 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (811.353µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44486]
I1010 13:33:23.805291  110878 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=32404 labels= fields= timeout=8m12s
I1010 13:33:23.806274  110878 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (555.906µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:33:23.809009  110878 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (680.61µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44488]
I1010 13:33:23.809441  110878 reflector.go:150] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.809464  110878 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=32404 labels= fields= timeout=9m2s
I1010 13:33:23.809519  110878 reflector.go:185] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.809668  110878 get.go:251] Starting watch for /api/v1/services, rv=32398 labels= fields= timeout=9m42s
I1010 13:33:23.810291  110878 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (567.562µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44486]
I1010 13:33:23.810668  110878 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=32403 labels= fields= timeout=9m9s
I1010 13:33:23.810908  110878 reflector.go:150] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.810937  110878 reflector.go:185] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.812679  110878 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=32396 labels= fields= timeout=7m28s
I1010 13:33:23.813307  110878 get.go:251] Starting watch for /api/v1/nodes, rv=32396 labels= fields= timeout=9m16s
I1010 13:33:23.815156  110878 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (3.896823ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44490]
I1010 13:33:23.822214  110878 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=32404 labels= fields= timeout=6m22s
I1010 13:33:23.823203  110878 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=32396 labels= fields= timeout=7m2s
I1010 13:33:23.823250  110878 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=32398 labels= fields= timeout=9m43s
I1010 13:33:23.823388  110878 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=32404 labels= fields= timeout=9m32s
I1010 13:33:23.824090  110878 get.go:251] Starting watch for /api/v1/pods, rv=32397 labels= fields= timeout=7m29s
I1010 13:33:23.825836  110878 httplog.go:90] GET /healthz: (1.120206ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I1010 13:33:23.827296  110878 httplog.go:90] GET /api/v1/namespaces/default: (1.014889ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I1010 13:33:23.829302  110878 httplog.go:90] POST /api/v1/namespaces: (1.634142ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I1010 13:33:23.830663  110878 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (984.347µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I1010 13:33:23.836714  110878 httplog.go:90] POST /api/v1/namespaces/default/services: (5.572174ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I1010 13:33:23.838416  110878 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.170115ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I1010 13:33:23.842283  110878 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (2.969972ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I1010 13:33:23.909472  110878 shared_informer.go:227] caches populated
I1010 13:33:23.909521  110878 shared_informer.go:227] caches populated
I1010 13:33:23.909529  110878 shared_informer.go:227] caches populated
I1010 13:33:23.909535  110878 shared_informer.go:227] caches populated
I1010 13:33:23.909542  110878 shared_informer.go:227] caches populated
I1010 13:33:23.909549  110878 shared_informer.go:227] caches populated
I1010 13:33:23.909555  110878 shared_informer.go:227] caches populated
I1010 13:33:23.909561  110878 shared_informer.go:227] caches populated
I1010 13:33:23.909567  110878 shared_informer.go:227] caches populated
I1010 13:33:23.909578  110878 shared_informer.go:227] caches populated
I1010 13:33:23.909584  110878 shared_informer.go:227] caches populated
I1010 13:33:23.909593  110878 shared_informer.go:227] caches populated
I1010 13:33:23.909942  110878 plugins.go:630] Loaded volume plugin "kubernetes.io/mock-provisioner"
W1010 13:33:23.909972  110878 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 13:33:23.910011  110878 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 13:33:23.910035  110878 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 13:33:23.910098  110878 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 13:33:23.910110  110878 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1010 13:33:23.910170  110878 pv_controller_base.go:289] Starting persistent volume controller
I1010 13:33:23.910723  110878 shared_informer.go:197] Waiting for caches to sync for persistent volume
I1010 13:33:23.910429  110878 reflector.go:150] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.910822  110878 reflector.go:185] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.910478  110878 reflector.go:150] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.911236  110878 reflector.go:185] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.910533  110878 reflector.go:150] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.911635  110878 reflector.go:185] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.911983  110878 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (676.656µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I1010 13:33:23.910541  110878 reflector.go:150] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.912349  110878 reflector.go:185] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.913350  110878 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (444.5µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44512]
I1010 13:33:23.914445  110878 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (412.558µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44514]
I1010 13:33:23.910646  110878 reflector.go:150] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.914720  110878 reflector.go:185] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I1010 13:33:23.915161  110878 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=32396 labels= fields= timeout=8m45s
I1010 13:33:23.915829  110878 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=32396 labels= fields= timeout=9m45s
I1010 13:33:23.916247  110878 get.go:251] Starting watch for /api/v1/nodes, rv=32396 labels= fields= timeout=7m40s
I1010 13:33:23.916305  110878 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (2.583978ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I1010 13:33:23.916614  110878 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (900.052µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44522]
I1010 13:33:23.917233  110878 get.go:251] Starting watch for /api/v1/pods, rv=32397 labels= fields= timeout=9m19s
I1010 13:33:23.917481  110878 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=32404 labels= fields= timeout=8m20s
I1010 13:33:24.010407  110878 shared_informer.go:227] caches populated
I1010 13:33:24.010454  110878 shared_informer.go:227] caches populated
I1010 13:33:24.010461  110878 shared_informer.go:227] caches populated
I1010 13:33:24.010465  110878 shared_informer.go:227] caches populated
I1010 13:33:24.010469  110878 shared_informer.go:227] caches populated
I1010 13:33:24.010935  110878 shared_informer.go:227] caches populated
I1010 13:33:24.010956  110878 shared_informer.go:204] Caches are synced for persistent volume 
I1010 13:33:24.010975  110878 pv_controller_base.go:160] controller initialized
I1010 13:33:24.011100  110878 pv_controller_base.go:426] resyncing PV controller
I1010 13:33:24.016900  110878 node_tree.go:93] Added node "node-1" in group "" to NodeTree
I1010 13:33:24.017573  110878 httplog.go:90] POST /api/v1/nodes: (6.492384ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:24.019845  110878 httplog.go:90] POST /api/v1/nodes: (1.682283ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:24.020774  110878 node_tree.go:93] Added node "node-2" in group "" to NodeTree
I1010 13:33:24.023629  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.40055ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:24.025896  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.42208ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:24.026231  110878 volume_binding_test.go:191] Running test immediate pvc prebound
I1010 13:33:24.028341  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.826909ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:24.031028  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.111377ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:24.036177  110878 httplog.go:90] POST /api/v1/persistentvolumes: (4.512131ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:24.036897  110878 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-pvc-prebound", version 32726
I1010 13:33:24.037097  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Pending, bound to: "", boundByController: false
I1010 13:33:24.037125  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I1010 13:33:24.037134  110878 pv_controller.go:779] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I1010 13:33:24.040359  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (2.741229ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.041156  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 32727
I1010 13:33:24.041374  110878 pv_controller.go:800] volume "pv-i-pvc-prebound" entered phase "Available"
I1010 13:33:24.041502  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 32727
I1010 13:33:24.041612  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I1010 13:33:24.041739  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I1010 13:33:24.041565  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (4.74418ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:24.041875  110878 pv_controller.go:779] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I1010 13:33:24.042180  110878 pv_controller.go:782] updating PersistentVolume[pv-i-pvc-prebound]: phase Available already set
I1010 13:33:24.042474  110878 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound", version 32728
I1010 13:33:24.042507  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound]: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1010 13:33:24.042534  110878 pv_controller.go:349] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested
I1010 13:33:24.042548  110878 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested and found: phase: Available, bound to: "", boundByController: false
I1010 13:33:24.042561  110878 pv_controller.go:372] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound]: volume is unbound, binding
I1010 13:33:24.042575  110878 pv_controller.go:933] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound"
I1010 13:33:24.042583  110878 pv_controller.go:831] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound"
I1010 13:33:24.042614  110878 pv_controller.go:851] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound" bound to volume "pv-i-pvc-prebound"
I1010 13:33:24.044989  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound: (2.037465ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:24.045548  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 32729
I1010 13:33:24.045581  110878 pv_controller.go:864] updating PersistentVolume[pv-i-pvc-prebound]: bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound"
I1010 13:33:24.045593  110878 pv_controller.go:779] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1010 13:33:24.045821  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 32729
I1010 13:33:24.045868  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound (uid: 84a85334-ee0a-4080-9b2d-bbc37275e498)", boundByController: true
I1010 13:33:24.045880  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound
I1010 13:33:24.045896  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound found: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1010 13:33:24.045909  110878 pv_controller.go:621] synchronizing PersistentVolume[pv-i-pvc-prebound]: all is bound
I1010 13:33:24.045916  110878 pv_controller.go:779] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1010 13:33:24.048552  110878 store.go:365] GuaranteedUpdate of /d82f0006-70be-429b-bed8-090d5fff3021/persistentvolumes/pv-i-pvc-prebound failed because of a conflict, going to retry
I1010 13:33:24.048566  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (2.692095ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:24.048844  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (2.278302ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44530]
I1010 13:33:24.049022  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 32730
I1010 13:33:24.049054  110878 pv_controller.go:800] volume "pv-i-pvc-prebound" entered phase "Bound"
I1010 13:33:24.049068  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound]: binding to "pv-i-pvc-prebound"
I1010 13:33:24.049103  110878 pv_controller.go:903] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound"
I1010 13:33:24.049180  110878 pv_controller.go:792] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound failed: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 13:33:24.049225  110878 pv_controller_base.go:204] could not sync volume "pv-i-pvc-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 13:33:24.049255  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 32730
I1010 13:33:24.049290  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound (uid: 84a85334-ee0a-4080-9b2d-bbc37275e498)", boundByController: true
I1010 13:33:24.049301  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound
I1010 13:33:24.049320  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound found: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1010 13:33:24.049332  110878 pv_controller.go:621] synchronizing PersistentVolume[pv-i-pvc-prebound]: all is bound
I1010 13:33:24.049339  110878 pv_controller.go:779] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1010 13:33:24.049353  110878 pv_controller.go:782] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I1010 13:33:24.051135  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-i-prebound: (1.776413ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44530]
I1010 13:33:24.051405  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound" with version 32731
I1010 13:33:24.051650  110878 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound]: bound to "pv-i-pvc-prebound"
I1010 13:33:24.051907  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound] status: set phase Bound
I1010 13:33:24.056921  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (13.897449ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.057869  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-i-prebound/status: (5.350209ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44530]
I1010 13:33:24.058064  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pvc-prebound
I1010 13:33:24.058238  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pvc-prebound
I1010 13:33:24.058426  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound" with version 32734
I1010 13:33:24.058457  110878 pv_controller.go:744] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound" entered phase "Bound"
I1010 13:33:24.058471  110878 pv_controller.go:959] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound"
I1010 13:33:24.058498  110878 pv_controller.go:960] volume "pv-i-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound (uid: 84a85334-ee0a-4080-9b2d-bbc37275e498)", boundByController: true
I1010 13:33:24.058516  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound" status after binding: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I1010 13:33:24.058678  110878 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pvc-prebound" match with Node "node-1"
I1010 13:33:24.058551  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound" with version 32734
I1010 13:33:24.058831  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound]: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I1010 13:33:24.058959  110878 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound]: volume "pv-i-pvc-prebound" found: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound (uid: 84a85334-ee0a-4080-9b2d-bbc37275e498)", boundByController: true
I1010 13:33:24.058984  110878 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound]: claim is already correctly bound
I1010 13:33:24.058994  110878 pv_controller.go:933] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound"
I1010 13:33:24.059008  110878 pv_controller.go:831] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound"
I1010 13:33:24.059052  110878 pv_controller.go:843] updating PersistentVolume[pv-i-pvc-prebound]: already bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound"
I1010 13:33:24.059067  110878 pv_controller.go:779] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1010 13:33:24.059077  110878 pv_controller.go:782] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I1010 13:33:24.059087  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound]: binding to "pv-i-pvc-prebound"
I1010 13:33:24.059127  110878 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound]: already bound to "pv-i-pvc-prebound"
I1010 13:33:24.059138  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound] status: set phase Bound
I1010 13:33:24.059157  110878 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound] status: phase Bound already set
I1010 13:33:24.059194  110878 pv_controller.go:959] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound"
I1010 13:33:24.059222  110878 pv_controller.go:960] volume "pv-i-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound (uid: 84a85334-ee0a-4080-9b2d-bbc37275e498)", boundByController: true
I1010 13:33:24.059239  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound" status after binding: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I1010 13:33:24.059367  110878 scheduler_binder.go:653] PersistentVolume "pv-i-pvc-prebound", Node "node-2" mismatch for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pvc-prebound": No matching NodeSelectorTerms
I1010 13:33:24.059630  110878 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pvc-prebound", node "node-1"
I1010 13:33:24.059808  110878 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pvc-prebound", node "node-1": all PVCs bound and nothing to do
I1010 13:33:24.060051  110878 factory.go:710] Attempting to bind pod-i-pvc-prebound to node-1
I1010 13:33:24.063299  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pvc-prebound/binding: (2.742051ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.063850  110878 scheduler.go:730] pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pvc-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1010 13:33:24.069571  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (5.168504ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.160002  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pvc-prebound: (2.091854ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.162721  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-i-prebound: (1.849402ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.164670  110878 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-pvc-prebound: (1.353064ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.174038  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (8.624773ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.184803  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (6.448044ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.185313  110878 pv_controller_base.go:265] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound" deleted
I1010 13:33:24.185473  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 32730
I1010 13:33:24.185609  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound (uid: 84a85334-ee0a-4080-9b2d-bbc37275e498)", boundByController: true
I1010 13:33:24.185728  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound
I1010 13:33:24.188449  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-i-prebound: (2.301872ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.189609  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound not found
I1010 13:33:24.191236  110878 pv_controller.go:577] volume "pv-i-pvc-prebound" is released and reclaim policy "Retain" will be executed
I1010 13:33:24.191417  110878 pv_controller.go:779] updating PersistentVolume[pv-i-pvc-prebound]: set phase Released
I1010 13:33:24.195497  110878 httplog.go:90] DELETE /api/v1/persistentvolumes: (8.575176ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:24.197154  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (2.912419ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.197476  110878 pv_controller.go:792] updating PersistentVolume[pv-i-pvc-prebound]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": StorageError: invalid object, Code: 4, Key: /d82f0006-70be-429b-bed8-090d5fff3021/persistentvolumes/pv-i-pvc-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 3a10eb8b-81a9-4d7f-976c-073152284651, UID in object meta: 
I1010 13:33:24.197708  110878 pv_controller_base.go:204] could not sync volume "pv-i-pvc-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": StorageError: invalid object, Code: 4, Key: /d82f0006-70be-429b-bed8-090d5fff3021/persistentvolumes/pv-i-pvc-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 3a10eb8b-81a9-4d7f-976c-073152284651, UID in object meta: 
I1010 13:33:24.197908  110878 pv_controller_base.go:216] volume "pv-i-pvc-prebound" deleted
I1010 13:33:24.198019  110878 pv_controller_base.go:403] deletion of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-prebound" was already processed
I1010 13:33:24.212817  110878 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (15.313667ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:24.215395  110878 volume_binding_test.go:191] Running test wait can bind
I1010 13:33:24.217815  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.104436ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:24.220076  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.872289ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:24.222734  110878 httplog.go:90] POST /api/v1/persistentvolumes: (2.105921ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:24.223172  110878 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind", version 32756
I1010 13:33:24.223338  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind]: phase: Pending, bound to: "", boundByController: false
I1010 13:33:24.223451  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I1010 13:33:24.223516  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind]: set phase Available
I1010 13:33:24.225658  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (2.017168ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:24.225870  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (2.012122ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.227052  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 32758
I1010 13:33:24.227095  110878 pv_controller.go:800] volume "pv-w-canbind" entered phase "Available"
I1010 13:33:24.227135  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 32758
I1010 13:33:24.227165  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "", boundByController: false
I1010 13:33:24.227187  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I1010 13:33:24.227194  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind]: set phase Available
I1010 13:33:24.227207  110878 pv_controller.go:782] updating PersistentVolume[pv-w-canbind]: phase Available already set
I1010 13:33:24.227233  110878 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind", version 32757
I1010 13:33:24.227271  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:24.227322  110878 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind]: no volume found
I1010 13:33:24.227345  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind] status: set phase Pending
I1010 13:33:24.227377  110878 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind] status: phase Pending already set
I1010 13:33:24.227428  110878 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4", Name:"pvc-w-canbind", UID:"c460d561-c474-4b9e-8751-f626c6de43e2", APIVersion:"v1", ResourceVersion:"32757", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1010 13:33:24.229730  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (1.904227ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:24.232986  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (3.800591ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.234234  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-canbind
I1010 13:33:24.234251  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-canbind
I1010 13:33:24.234549  110878 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-canbind" on node "node-1"
I1010 13:33:24.234636  110878 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-canbind", PVC "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind" on node "node-2"
I1010 13:33:24.234679  110878 scheduler_binder.go:725] storage class "wait-s2tn" of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind" does not support dynamic provisioning
I1010 13:33:24.234725  110878 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-canbind", node "node-1"
I1010 13:33:24.234788  110878 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-canbind", version 32758
I1010 13:33:24.234922  110878 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-canbind", node "node-1"
I1010 13:33:24.234967  110878 scheduler_binder.go:404] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind" bound to volume "pv-w-canbind"
I1010 13:33:24.237687  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind: (2.37612ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.238121  110878 scheduler_binder.go:410] updating PersistentVolume[pv-w-canbind]: bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind"
I1010 13:33:24.238199  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 32761
I1010 13:33:24.238227  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind (uid: c460d561-c474-4b9e-8751-f626c6de43e2)", boundByController: true
I1010 13:33:24.238235  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind
I1010 13:33:24.238251  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:24.238261  110878 pv_controller.go:605] synchronizing PersistentVolume[pv-w-canbind]: volume not bound yet, waiting for syncClaim to fix it
I1010 13:33:24.238284  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind" with version 32757
I1010 13:33:24.238300  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:24.238322  110878 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind]: volume "pv-w-canbind" found: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind (uid: c460d561-c474-4b9e-8751-f626c6de43e2)", boundByController: true
I1010 13:33:24.238331  110878 pv_controller.go:933] binding volume "pv-w-canbind" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind"
I1010 13:33:24.238339  110878 pv_controller.go:831] updating PersistentVolume[pv-w-canbind]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind"
I1010 13:33:24.238354  110878 pv_controller.go:843] updating PersistentVolume[pv-w-canbind]: already bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind"
I1010 13:33:24.238364  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind]: set phase Bound
I1010 13:33:24.240814  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (2.115289ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.241045  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 32762
I1010 13:33:24.241076  110878 pv_controller.go:800] volume "pv-w-canbind" entered phase "Bound"
I1010 13:33:24.241093  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind]: binding to "pv-w-canbind"
I1010 13:33:24.241130  110878 pv_controller.go:903] volume "pv-w-canbind" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind"
I1010 13:33:24.241437  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 32762
I1010 13:33:24.241647  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind (uid: c460d561-c474-4b9e-8751-f626c6de43e2)", boundByController: true
I1010 13:33:24.241731  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind
I1010 13:33:24.241875  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:24.241986  110878 pv_controller.go:605] synchronizing PersistentVolume[pv-w-canbind]: volume not bound yet, waiting for syncClaim to fix it
I1010 13:33:24.243721  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-canbind: (2.347917ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.243978  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind" with version 32763
I1010 13:33:24.244020  110878 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind]: bound to "pv-w-canbind"
I1010 13:33:24.244032  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind] status: set phase Bound
I1010 13:33:24.246263  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-canbind/status: (1.841746ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.246578  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind" with version 32764
I1010 13:33:24.246602  110878 pv_controller.go:744] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind" entered phase "Bound"
I1010 13:33:24.246615  110878 pv_controller.go:959] volume "pv-w-canbind" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind"
I1010 13:33:24.246631  110878 pv_controller.go:960] volume "pv-w-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind (uid: c460d561-c474-4b9e-8751-f626c6de43e2)", boundByController: true
I1010 13:33:24.246641  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind" status after binding: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I1010 13:33:24.246664  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind" with version 32764
I1010 13:33:24.246674  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind]: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I1010 13:33:24.246686  110878 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind]: volume "pv-w-canbind" found: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind (uid: c460d561-c474-4b9e-8751-f626c6de43e2)", boundByController: true
I1010 13:33:24.246693  110878 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind]: claim is already correctly bound
I1010 13:33:24.246699  110878 pv_controller.go:933] binding volume "pv-w-canbind" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind"
I1010 13:33:24.246707  110878 pv_controller.go:831] updating PersistentVolume[pv-w-canbind]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind"
I1010 13:33:24.246718  110878 pv_controller.go:843] updating PersistentVolume[pv-w-canbind]: already bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind"
I1010 13:33:24.246725  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind]: set phase Bound
I1010 13:33:24.246730  110878 pv_controller.go:782] updating PersistentVolume[pv-w-canbind]: phase Bound already set
I1010 13:33:24.246737  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind]: binding to "pv-w-canbind"
I1010 13:33:24.246855  110878 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind]: already bound to "pv-w-canbind"
I1010 13:33:24.246871  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind] status: set phase Bound
I1010 13:33:24.246884  110878 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind] status: phase Bound already set
I1010 13:33:24.246892  110878 pv_controller.go:959] volume "pv-w-canbind" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind"
I1010 13:33:24.246904  110878 pv_controller.go:960] volume "pv-w-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind (uid: c460d561-c474-4b9e-8751-f626c6de43e2)", boundByController: true
I1010 13:33:24.246917  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind" status after binding: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
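[Editor's note] The block above traces the PV controller's syncVolume/syncClaim loop driving pv-w-canbind and pvc-w-canbind to phase "Bound" before the scheduler finishes binding the pod. As a rough illustration only (not the test's own code), a client-go caller could verify that same end state as sketched below; the package name, helper name, and the ns/claim/volume arguments are assumptions, and the Get call omits a context argument as in the client-go vintage of this run (newer releases require one).

package volumeschedulingsketch

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkClaimBound is an illustrative helper (not part of the test) that
// confirms a PVC reached phase Bound and points at the expected volume,
// mirroring the `claim ... entered phase "Bound"` lines above.
func checkClaimBound(clientset kubernetes.Interface, ns, claimName, volumeName string) error {
	pvc, err := clientset.CoreV1().PersistentVolumeClaims(ns).Get(claimName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if pvc.Status.Phase != v1.ClaimBound || pvc.Spec.VolumeName != volumeName {
		return fmt.Errorf("claim %s/%s not bound yet: phase=%s volume=%q",
			ns, claimName, pvc.Status.Phase, pvc.Spec.VolumeName)
	}
	return nil
}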
I1010 13:33:24.337122  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind: (2.744614ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.436228  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind: (1.814429ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.537440  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind: (1.959344ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.635930  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind: (1.791966ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.735599  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind: (1.664589ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.798346  110878 cache.go:669] Couldn't expire cache for pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-canbind. Binding is still in progress.
I1010 13:33:24.836371  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind: (2.4854ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:24.936191  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind: (1.788909ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:25.035702  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind: (1.803879ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:25.135957  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind: (1.99905ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:25.238471  110878 scheduler_binder.go:553] All PVCs for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-canbind" are bound
I1010 13:33:25.238564  110878 factory.go:710] Attempting to bind pod-w-canbind to node-1
I1010 13:33:25.240718  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind: (6.834397ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:25.241676  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind/binding: (2.577379ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.242177  110878 scheduler.go:730] pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-canbind is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1010 13:33:25.244484  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (1.785316ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.336083  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind: (2.232885ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.337896  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-canbind: (1.318614ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.339435  110878 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind: (1.179992ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.347799  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (7.9596ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.351947  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (3.787952ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.352359  110878 pv_controller_base.go:265] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind" deleted
I1010 13:33:25.352409  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 32762
I1010 13:33:25.352631  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind (uid: c460d561-c474-4b9e-8751-f626c6de43e2)", boundByController: true
I1010 13:33:25.352653  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind
I1010 13:33:25.353669  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-canbind: (792.564µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:25.353904  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind not found
I1010 13:33:25.353929  110878 pv_controller.go:577] volume "pv-w-canbind" is released and reclaim policy "Retain" will be executed
I1010 13:33:25.353942  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind]: set phase Released
I1010 13:33:25.356626  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (2.458844ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:25.357685  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 32834
I1010 13:33:25.357715  110878 pv_controller.go:800] volume "pv-w-canbind" entered phase "Released"
I1010 13:33:25.357727  110878 pv_controller.go:1013] reclaimVolume[pv-w-canbind]: policy is Retain, nothing to do
I1010 13:33:25.357768  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 32834
I1010 13:33:25.357794  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind]: phase: Released, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind (uid: c460d561-c474-4b9e-8751-f626c6de43e2)", boundByController: true
I1010 13:33:25.357808  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind
I1010 13:33:25.357831  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind not found
I1010 13:33:25.357837  110878 pv_controller.go:1013] reclaimVolume[pv-w-canbind]: policy is Retain, nothing to do
I1010 13:33:25.360416  110878 httplog.go:90] DELETE /api/v1/persistentvolumes: (7.988346ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.360963  110878 pv_controller_base.go:216] volume "pv-w-canbind" deleted
I1010 13:33:25.361010  110878 pv_controller_base.go:403] deletion of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind" was already processed
I1010 13:33:25.368707  110878 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (7.634149ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
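[Editor's note] Each "Running test ..." case starts by POSTing a fresh pair of storage classes (the random suffixes such as "wait-s2tn" and "wait-zz4w" come from the test), and the WaitForFirstConsumer events plus the "does not support dynamic provisioning" lines describe their shape. Below is a minimal sketch of that kind of class using the standard k8s.io/api types; the function, class name, and provisioner string are illustrative assumptions, not the test's fixture code.

package volumeschedulingsketch

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newWaitStorageClass sketches a non-provisioning StorageClass whose binding
// is deferred until a consuming pod exists, matching the WaitForFirstConsumer
// events and the "does not support dynamic provisioning" log lines above.
func newWaitStorageClass(name string) *storagev1.StorageClass {
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	return &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: name},
		Provisioner:       "kubernetes.io/no-provisioner", // no dynamic provisioning
		VolumeBindingMode: &mode,
	}
}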
I1010 13:33:25.368993  110878 volume_binding_test.go:191] Running test wait cannot bind
I1010 13:33:25.370830  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.513665ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.372912  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.568465ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.375192  110878 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind", version 32841
I1010 13:33:25.375241  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:25.375264  110878 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind]: no volume found
I1010 13:33:25.375285  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind] status: set phase Pending
I1010 13:33:25.375296  110878 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind] status: phase Pending already set
I1010 13:33:25.375494  110878 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4", Name:"pvc-w-cannotbind", UID:"995aa53a-beb1-42ce-bcbe-00d86081b56b", APIVersion:"v1", ResourceVersion:"32841", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1010 13:33:25.376405  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (2.997967ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.378932  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (2.007999ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.379966  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (4.426614ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:25.380498  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind
I1010 13:33:25.380527  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind
I1010 13:33:25.380705  110878 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind", PVC "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind" on node "node-2"
I1010 13:33:25.380705  110878 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind", PVC "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind" on node "node-1"
I1010 13:33:25.380738  110878 scheduler_binder.go:725] storage class "wait-zz4w" of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind" does not support dynamic provisioning
I1010 13:33:25.380765  110878 scheduler_binder.go:725] storage class "wait-zz4w" of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind" does not support dynamic provisioning
I1010 13:33:25.380825  110878 factory.go:645] Unable to schedule volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I1010 13:33:25.380870  110878 scheduler.go:746] Updating pod condition for volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I1010 13:33:25.383840  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-cannotbind/status: (2.5266ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:25.388523  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (5.630452ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:25.388556  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-cannotbind: (4.36053ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
I1010 13:33:25.388770  110878 generic_scheduler.go:325] Preemption will not help schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind on any node.
I1010 13:33:25.388915  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind
I1010 13:33:25.388932  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind
I1010 13:33:25.389253  110878 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind", PVC "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind" on node "node-1"
I1010 13:33:25.389283  110878 scheduler_binder.go:725] storage class "wait-zz4w" of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind" does not support dynamic provisioning
I1010 13:33:25.389332  110878 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind", PVC "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind" on node "node-2"
I1010 13:33:25.389354  110878 scheduler_binder.go:725] storage class "wait-zz4w" of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind" does not support dynamic provisioning
I1010 13:33:25.389414  110878 factory.go:645] Unable to schedule volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I1010 13:33:25.389450  110878 scheduler.go:746] Updating pod condition for volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I1010 13:33:25.389724  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-cannotbind: (8.391381ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.393814  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-cannotbind: (2.889052ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:25.393967  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (3.570611ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I1010 13:33:25.394095  110878 generic_scheduler.go:325] Preemption will not help schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind on any node.
I1010 13:33:25.394137  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-cannotbind: (3.211872ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44528]
E1010 13:33:25.394478  110878 factory.go:685] pod is already present in unschedulableQ
I1010 13:33:25.482869  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-cannotbind: (2.132845ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:25.485197  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-cannotbind: (1.482782ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:25.490783  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind
I1010 13:33:25.490830  110878 scheduler.go:594] Skip schedule deleting pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind
I1010 13:33:25.492919  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (6.843804ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:25.493184  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (2.037894ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.498994  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (4.819682ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:25.499433  110878 pv_controller_base.go:265] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind" deleted
I1010 13:33:25.501175  110878 httplog.go:90] DELETE /api/v1/persistentvolumes: (1.780571ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:25.510806  110878 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (8.849348ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
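[Editor's note] In the "wait cannot bind" case the log shows a PVC and a pod being created but no PersistentVolume, so both node evaluations report "No matching volumes", the class refuses dynamic provisioning, and scheduling fails with "0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind." The sketch below shows such a claim under assumed names (namespace, claim, class, and size are illustrative, not the test's fixtures).

package volumeschedulingsketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newUnbindableClaim sketches a PVC that references a WaitForFirstConsumer,
// non-provisioning class while no PersistentVolume satisfies it; the scheduler
// then emits the "didn't find available persistent volumes to bind" event
// seen above for pod-w-cannotbind.
func newUnbindableClaim(ns, name, storageClassName string) *v1.PersistentVolumeClaim {
	return &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: name},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			StorageClassName: &storageClassName,
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
			},
		},
	}
}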
I1010 13:33:25.511081  110878 volume_binding_test.go:191] Running test wait can bind two
I1010 13:33:25.513880  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.353163ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:25.516199  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.639892ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:25.519005  110878 httplog.go:90] POST /api/v1/persistentvolumes: (2.370305ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:25.519799  110878 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-2", version 32860
I1010 13:33:25.519832  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Pending, bound to: "", boundByController: false
I1010 13:33:25.519854  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-2]: volume is unused
I1010 13:33:25.519863  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-2]: set phase Available
I1010 13:33:25.521467  110878 httplog.go:90] POST /api/v1/persistentvolumes: (1.947234ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:25.522732  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (2.602812ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.523095  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 32862
I1010 13:33:25.523134  110878 pv_controller.go:800] volume "pv-w-canbind-2" entered phase "Available"
I1010 13:33:25.523160  110878 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-3", version 32861
I1010 13:33:25.523172  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Pending, bound to: "", boundByController: false
I1010 13:33:25.523187  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-3]: volume is unused
I1010 13:33:25.523191  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-3]: set phase Available
I1010 13:33:25.524669  110878 httplog.go:90] POST /api/v1/persistentvolumes: (2.626248ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:25.525843  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (2.319704ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.526448  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 32864
I1010 13:33:25.528183  110878 pv_controller.go:800] volume "pv-w-canbind-3" entered phase "Available"
I1010 13:33:25.527630  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (2.035645ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:25.528499  110878 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2", version 32865
I1010 13:33:25.528593  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:25.528659  110878 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2]: no volume found
I1010 13:33:25.528726  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2] status: set phase Pending
I1010 13:33:25.528914  110878 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2] status: phase Pending already set
I1010 13:33:25.529291  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 32862
I1010 13:33:25.529424  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Available, bound to: "", boundByController: false
I1010 13:33:25.529504  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-2]: volume is unused
I1010 13:33:25.529552  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-2]: set phase Available
I1010 13:33:25.529621  110878 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-2]: phase Available already set
I1010 13:33:25.529698  110878 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-5", version 32863
I1010 13:33:25.528849  110878 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4", Name:"pvc-w-canbind-2", UID:"05a0b0cb-50d0-4733-ada9-6f2ebcd2549f", APIVersion:"v1", ResourceVersion:"32865", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1010 13:33:25.529771  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Pending, bound to: "", boundByController: false
I1010 13:33:25.529965  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-5]: volume is unused
I1010 13:33:25.529973  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-5]: set phase Available
I1010 13:33:25.531724  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (2.368474ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:25.532996  110878 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3", version 32866
I1010 13:33:25.533043  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:25.533078  110878 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3]: no volume found
I1010 13:33:25.533102  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3] status: set phase Pending
I1010 13:33:25.533121  110878 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3] status: phase Pending already set
I1010 13:33:25.533146  110878 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4", Name:"pvc-w-canbind-3", UID:"d7052efe-4aea-4e9b-8629-60db1a548ffe", APIVersion:"v1", ResourceVersion:"32866", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1010 13:33:25.533381  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (3.36164ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.533654  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-5/status: (3.077722ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45026]
I1010 13:33:25.533911  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 32868
I1010 13:33:25.533956  110878 pv_controller.go:800] volume "pv-w-canbind-5" entered phase "Available"
I1010 13:33:25.533981  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 32864
I1010 13:33:25.534015  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Available, bound to: "", boundByController: false
I1010 13:33:25.534036  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-3]: volume is unused
I1010 13:33:25.534043  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-3]: set phase Available
I1010 13:33:25.534052  110878 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-3]: phase Available already set
I1010 13:33:25.534066  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 32868
I1010 13:33:25.534080  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Available, bound to: "", boundByController: false
I1010 13:33:25.534101  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-5]: volume is unused
I1010 13:33:25.534113  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-5]: set phase Available
I1010 13:33:25.534125  110878 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-5]: phase Available already set
I1010 13:33:25.535552  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (2.315971ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:25.536135  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-canbind-2
I1010 13:33:25.536165  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-canbind-2
I1010 13:33:25.536321  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (1.945335ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45026]
I1010 13:33:25.536721  110878 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-canbind-2", PVC "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3" on node "node-1"
I1010 13:33:25.536797  110878 scheduler_binder.go:725] storage class "wait-p8gp" of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3" does not support dynamic provisioning
I1010 13:33:25.536808  110878 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-canbind-2" on node "node-2"
I1010 13:33:25.536904  110878 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-canbind-2", node "node-2"
I1010 13:33:25.537051  110878 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-canbind-2", version 32862
I1010 13:33:25.537108  110878 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-canbind-3", version 32864
I1010 13:33:25.537292  110878 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-canbind-2", node "node-2"
I1010 13:33:25.537339  110878 scheduler_binder.go:404] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2" bound to volume "pv-w-canbind-2"
I1010 13:33:25.540104  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2: (2.448412ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:25.540415  110878 scheduler_binder.go:410] updating PersistentVolume[pv-w-canbind-2]: bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2"
I1010 13:33:25.540528  110878 scheduler_binder.go:404] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3" bound to volume "pv-w-canbind-3"
I1010 13:33:25.540740  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 32871
I1010 13:33:25.540818  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2 (uid: 05a0b0cb-50d0-4733-ada9-6f2ebcd2549f)", boundByController: true
I1010 13:33:25.540831  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2
I1010 13:33:25.540851  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:25.540866  110878 pv_controller.go:605] synchronizing PersistentVolume[pv-w-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I1010 13:33:25.540937  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2" with version 32865
I1010 13:33:25.540951  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:25.541013  110878 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2]: volume "pv-w-canbind-2" found: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2 (uid: 05a0b0cb-50d0-4733-ada9-6f2ebcd2549f)", boundByController: true
I1010 13:33:25.541029  110878 pv_controller.go:933] binding volume "pv-w-canbind-2" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2"
I1010 13:33:25.541055  110878 pv_controller.go:831] updating PersistentVolume[pv-w-canbind-2]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2"
I1010 13:33:25.541108  110878 pv_controller.go:843] updating PersistentVolume[pv-w-canbind-2]: already bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2"
I1010 13:33:25.541133  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-2]: set phase Bound
I1010 13:33:25.543507  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (2.125979ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:25.543833  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3: (2.395167ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.543877  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 32872
I1010 13:33:25.543901  110878 pv_controller.go:800] volume "pv-w-canbind-2" entered phase "Bound"
I1010 13:33:25.543923  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2]: binding to "pv-w-canbind-2"
I1010 13:33:25.543942  110878 pv_controller.go:903] volume "pv-w-canbind-2" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2"
I1010 13:33:25.544078  110878 scheduler_binder.go:410] updating PersistentVolume[pv-w-canbind-3]: bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3"
I1010 13:33:25.544301  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 32872
I1010 13:33:25.544336  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2 (uid: 05a0b0cb-50d0-4733-ada9-6f2ebcd2549f)", boundByController: true
I1010 13:33:25.544349  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2
I1010 13:33:25.544443  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:25.544468  110878 pv_controller.go:605] synchronizing PersistentVolume[pv-w-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I1010 13:33:25.544498  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 32873
I1010 13:33:25.544519  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3 (uid: d7052efe-4aea-4e9b-8629-60db1a548ffe)", boundByController: true
I1010 13:33:25.544529  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3
I1010 13:33:25.544543  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:25.544557  110878 pv_controller.go:605] synchronizing PersistentVolume[pv-w-canbind-3]: volume not bound yet, waiting for syncClaim to fix it
I1010 13:33:25.546625  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-canbind-2: (2.376101ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.547066  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2" with version 32874
I1010 13:33:25.547123  110878 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2]: bound to "pv-w-canbind-2"
I1010 13:33:25.547136  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2] status: set phase Bound
I1010 13:33:25.553155  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-canbind-2/status: (5.60284ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.553543  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2" with version 32875
I1010 13:33:25.553572  110878 pv_controller.go:744] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2" entered phase "Bound"
I1010 13:33:25.553596  110878 pv_controller.go:959] volume "pv-w-canbind-2" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2"
I1010 13:33:25.553638  110878 pv_controller.go:960] volume "pv-w-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2 (uid: 05a0b0cb-50d0-4733-ada9-6f2ebcd2549f)", boundByController: true
I1010 13:33:25.553654  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2" status after binding: phase: Bound, bound to: "pv-w-canbind-2", bindCompleted: true, boundByController: true
I1010 13:33:25.553690  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3" with version 32866
I1010 13:33:25.553704  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:25.553768  110878 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3]: volume "pv-w-canbind-3" found: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3 (uid: d7052efe-4aea-4e9b-8629-60db1a548ffe)", boundByController: true
I1010 13:33:25.553780  110878 pv_controller.go:933] binding volume "pv-w-canbind-3" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3"
I1010 13:33:25.553791  110878 pv_controller.go:831] updating PersistentVolume[pv-w-canbind-3]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3"
I1010 13:33:25.553849  110878 pv_controller.go:843] updating PersistentVolume[pv-w-canbind-3]: already bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3"
I1010 13:33:25.553860  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-3]: set phase Bound
I1010 13:33:25.557022  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (1.903449ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.557244  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 32876
I1010 13:33:25.557274  110878 pv_controller.go:800] volume "pv-w-canbind-3" entered phase "Bound"
I1010 13:33:25.557286  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3]: binding to "pv-w-canbind-3"
I1010 13:33:25.557299  110878 pv_controller.go:903] volume "pv-w-canbind-3" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3"
I1010 13:33:25.557454  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 32876
I1010 13:33:25.557489  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3 (uid: d7052efe-4aea-4e9b-8629-60db1a548ffe)", boundByController: true
I1010 13:33:25.557502  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3
I1010 13:33:25.557520  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:25.557536  110878 pv_controller.go:605] synchronizing PersistentVolume[pv-w-canbind-3]: volume not bound yet, waiting for syncClaim to fix it
I1010 13:33:25.559511  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-canbind-3: (1.873461ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.559906  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3" with version 32877
I1010 13:33:25.559947  110878 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3]: bound to "pv-w-canbind-3"
I1010 13:33:25.559959  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3] status: set phase Bound
I1010 13:33:25.562181  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-canbind-3/status: (2.003527ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.562513  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3" with version 32878
I1010 13:33:25.562566  110878 pv_controller.go:744] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3" entered phase "Bound"
I1010 13:33:25.562583  110878 pv_controller.go:959] volume "pv-w-canbind-3" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3"
I1010 13:33:25.562611  110878 pv_controller.go:960] volume "pv-w-canbind-3" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3 (uid: d7052efe-4aea-4e9b-8629-60db1a548ffe)", boundByController: true
I1010 13:33:25.562650  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3" status after binding: phase: Bound, bound to: "pv-w-canbind-3", bindCompleted: true, boundByController: true
I1010 13:33:25.562688  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2" with version 32875
I1010 13:33:25.562703  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2]: phase: Bound, bound to: "pv-w-canbind-2", bindCompleted: true, boundByController: true
I1010 13:33:25.562795  110878 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2]: volume "pv-w-canbind-2" found: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2 (uid: 05a0b0cb-50d0-4733-ada9-6f2ebcd2549f)", boundByController: true
I1010 13:33:25.562810  110878 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2]: claim is already correctly bound
I1010 13:33:25.562821  110878 pv_controller.go:933] binding volume "pv-w-canbind-2" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2"
I1010 13:33:25.562831  110878 pv_controller.go:831] updating PersistentVolume[pv-w-canbind-2]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2"
I1010 13:33:25.562855  110878 pv_controller.go:843] updating PersistentVolume[pv-w-canbind-2]: already bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2"
I1010 13:33:25.562896  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-2]: set phase Bound
I1010 13:33:25.562920  110878 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-2]: phase Bound already set
I1010 13:33:25.562929  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2]: binding to "pv-w-canbind-2"
I1010 13:33:25.562984  110878 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2]: already bound to "pv-w-canbind-2"
I1010 13:33:25.563001  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2] status: set phase Bound
I1010 13:33:25.563117  110878 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2] status: phase Bound already set
I1010 13:33:25.563133  110878 pv_controller.go:959] volume "pv-w-canbind-2" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2"
I1010 13:33:25.563154  110878 pv_controller.go:960] volume "pv-w-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2 (uid: 05a0b0cb-50d0-4733-ada9-6f2ebcd2549f)", boundByController: true
I1010 13:33:25.563235  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2" status after binding: phase: Bound, bound to: "pv-w-canbind-2", bindCompleted: true, boundByController: true
I1010 13:33:25.563328  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3" with version 32878
I1010 13:33:25.563345  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3]: phase: Bound, bound to: "pv-w-canbind-3", bindCompleted: true, boundByController: true
I1010 13:33:25.563384  110878 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3]: volume "pv-w-canbind-3" found: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3 (uid: d7052efe-4aea-4e9b-8629-60db1a548ffe)", boundByController: true
I1010 13:33:25.563405  110878 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3]: claim is already correctly bound
I1010 13:33:25.563421  110878 pv_controller.go:933] binding volume "pv-w-canbind-3" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3"
I1010 13:33:25.563430  110878 pv_controller.go:831] updating PersistentVolume[pv-w-canbind-3]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3"
I1010 13:33:25.563469  110878 pv_controller.go:843] updating PersistentVolume[pv-w-canbind-3]: already bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3"
I1010 13:33:25.563486  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-3]: set phase Bound
I1010 13:33:25.563494  110878 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-3]: phase Bound already set
I1010 13:33:25.563501  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3]: binding to "pv-w-canbind-3"
I1010 13:33:25.563528  110878 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3]: already bound to "pv-w-canbind-3"
I1010 13:33:25.563540  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3] status: set phase Bound
I1010 13:33:25.563555  110878 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3] status: phase Bound already set
I1010 13:33:25.563565  110878 pv_controller.go:959] volume "pv-w-canbind-3" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3"
I1010 13:33:25.563584  110878 pv_controller.go:960] volume "pv-w-canbind-3" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3 (uid: d7052efe-4aea-4e9b-8629-60db1a548ffe)", boundByController: true
I1010 13:33:25.563597  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3" status after binding: phase: Bound, bound to: "pv-w-canbind-3", bindCompleted: true, boundByController: true
I1010 13:33:25.638848  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind-2: (2.105637ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.738713  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind-2: (2.065577ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.798636  110878 cache.go:669] Couldn't expire cache for pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-canbind-2. Binding is still in progress.
I1010 13:33:25.838977  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind-2: (2.313594ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:25.938452  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind-2: (1.86885ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.038485  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind-2: (1.851882ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.138608  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind-2: (2.004028ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.238536  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind-2: (1.873415ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.338573  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind-2: (1.96017ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.452227  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind-2: (15.580837ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.539172  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind-2: (2.469674ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.544399  110878 scheduler_binder.go:553] All PVCs for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-canbind-2" are bound
I1010 13:33:26.544456  110878 factory.go:710] Attempting to bind pod-w-canbind-2 to node-2
I1010 13:33:26.547635  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind-2/binding: (2.777415ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.548062  110878 scheduler.go:730] pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-canbind-2 is bound successfully on node "node-2", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1010 13:33:26.551406  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (2.548894ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.638924  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-canbind-2: (2.268058ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.641015  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-canbind-2: (1.407644ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.642456  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-canbind-3: (935.786µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.644203  110878 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-2: (1.327402ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.645801  110878 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-3: (1.293318ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.647349  110878 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-5: (1.03149ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.654678  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (6.816287ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.659658  110878 pv_controller_base.go:265] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2" deleted
I1010 13:33:26.659700  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 32872
I1010 13:33:26.659733  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2 (uid: 05a0b0cb-50d0-4733-ada9-6f2ebcd2549f)", boundByController: true
I1010 13:33:26.659788  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2
I1010 13:33:26.661615  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-canbind-2: (1.49716ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.661877  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2 not found
I1010 13:33:26.661898  110878 pv_controller.go:577] volume "pv-w-canbind-2" is released and reclaim policy "Retain" will be executed
I1010 13:33:26.661910  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-2]: set phase Released
I1010 13:33:26.664074  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (1.955415ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.664402  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 32944
I1010 13:33:26.664423  110878 pv_controller.go:800] volume "pv-w-canbind-2" entered phase "Released"
I1010 13:33:26.664444  110878 pv_controller.go:1013] reclaimVolume[pv-w-canbind-2]: policy is Retain, nothing to do
I1010 13:33:26.664464  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 32944
I1010 13:33:26.664484  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Released, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2 (uid: 05a0b0cb-50d0-4733-ada9-6f2ebcd2549f)", boundByController: true
I1010 13:33:26.664497  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2
I1010 13:33:26.664519  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2 not found
I1010 13:33:26.664526  110878 pv_controller.go:1013] reclaimVolume[pv-w-canbind-2]: policy is Retain, nothing to do
I1010 13:33:26.664796  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (9.620743ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.666089  110878 pv_controller_base.go:265] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3" deleted
I1010 13:33:26.666159  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 32876
I1010 13:33:26.666189  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3 (uid: d7052efe-4aea-4e9b-8629-60db1a548ffe)", boundByController: true
I1010 13:33:26.666199  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3
I1010 13:33:26.667465  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-canbind-3: (909.01µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.667905  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3 not found
I1010 13:33:26.667934  110878 pv_controller.go:577] volume "pv-w-canbind-3" is released and reclaim policy "Retain" will be executed
I1010 13:33:26.667944  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-3]: set phase Released
I1010 13:33:26.671512  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (3.153717ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.671852  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 32947
I1010 13:33:26.671908  110878 pv_controller.go:800] volume "pv-w-canbind-3" entered phase "Released"
I1010 13:33:26.671922  110878 pv_controller.go:1013] reclaimVolume[pv-w-canbind-3]: policy is Retain, nothing to do
I1010 13:33:26.673151  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 32947
I1010 13:33:26.673351  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Released, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3 (uid: d7052efe-4aea-4e9b-8629-60db1a548ffe)", boundByController: true
I1010 13:33:26.673631  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3
I1010 13:33:26.673898  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3 not found
I1010 13:33:26.673920  110878 pv_controller.go:1013] reclaimVolume[pv-w-canbind-3]: policy is Retain, nothing to do
I1010 13:33:26.674120  110878 pv_controller_base.go:216] volume "pv-w-canbind-2" deleted
I1010 13:33:26.674313  110878 pv_controller_base.go:403] deletion of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-2" was already processed
I1010 13:33:26.677118  110878 pv_controller_base.go:216] volume "pv-w-canbind-3" deleted
I1010 13:33:26.677169  110878 pv_controller_base.go:403] deletion of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-3" was already processed
I1010 13:33:26.679410  110878 httplog.go:90] DELETE /api/v1/persistentvolumes: (14.033117ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.681280  110878 pv_controller_base.go:216] volume "pv-w-canbind-5" deleted
I1010 13:33:26.693945  110878 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (13.934653ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.694578  110878 volume_binding_test.go:191] Running test wait cannot bind two
I1010 13:33:26.696945  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.069593ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.698984  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.527645ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.702043  110878 httplog.go:90] POST /api/v1/persistentvolumes: (2.338233ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.702622  110878 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-cannotbind-1", version 32959
I1010 13:33:26.702651  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-cannotbind-1]: phase: Pending, bound to: "", boundByController: false
I1010 13:33:26.702673  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-w-cannotbind-1]: volume is unused
I1010 13:33:26.702681  110878 pv_controller.go:779] updating PersistentVolume[pv-w-cannotbind-1]: set phase Available
I1010 13:33:26.704471  110878 httplog.go:90] POST /api/v1/persistentvolumes: (1.908199ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.704888  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-cannotbind-1/status: (1.963325ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.705164  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-1" with version 32961
I1010 13:33:26.705290  110878 pv_controller.go:800] volume "pv-w-cannotbind-1" entered phase "Available"
I1010 13:33:26.705561  110878 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-cannotbind-2", version 32960
I1010 13:33:26.705592  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-cannotbind-2]: phase: Pending, bound to: "", boundByController: false
I1010 13:33:26.705612  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume is unused
I1010 13:33:26.705618  110878 pv_controller.go:779] updating PersistentVolume[pv-w-cannotbind-2]: set phase Available
I1010 13:33:26.708094  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (2.487934ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.708318  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-cannotbind-2/status: (2.450892ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.708641  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 32964
I1010 13:33:26.708665  110878 pv_controller.go:800] volume "pv-w-cannotbind-2" entered phase "Available"
I1010 13:33:26.708686  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-1" with version 32961
I1010 13:33:26.708701  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-cannotbind-1]: phase: Available, bound to: "", boundByController: false
I1010 13:33:26.708721  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-w-cannotbind-1]: volume is unused
I1010 13:33:26.708726  110878 pv_controller.go:779] updating PersistentVolume[pv-w-cannotbind-1]: set phase Available
I1010 13:33:26.708735  110878 pv_controller.go:782] updating PersistentVolume[pv-w-cannotbind-1]: phase Available already set
I1010 13:33:26.709020  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 32964
I1010 13:33:26.709039  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-cannotbind-2]: phase: Available, bound to: "", boundByController: false
I1010 13:33:26.709060  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume is unused
I1010 13:33:26.709066  110878 pv_controller.go:779] updating PersistentVolume[pv-w-cannotbind-2]: set phase Available
I1010 13:33:26.709074  110878 pv_controller.go:782] updating PersistentVolume[pv-w-cannotbind-2]: phase Available already set
I1010 13:33:26.709960  110878 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind-1", version 32963
I1010 13:33:26.710067  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind-1]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:26.710131  110878 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind-1]: no volume found
I1010 13:33:26.710194  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind-1] status: set phase Pending
I1010 13:33:26.710279  110878 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind-1] status: phase Pending already set
I1010 13:33:26.710360  110878 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4", Name:"pvc-w-cannotbind-1", UID:"deaf7cbe-e61c-4f73-bdb7-9ed1d89a2909", APIVersion:"v1", ResourceVersion:"32963", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1010 13:33:26.711771  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (2.441635ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.712210  110878 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind-2", version 32965
I1010 13:33:26.712263  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:26.712304  110878 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind-2]: no volume found
I1010 13:33:26.712330  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind-2] status: set phase Pending
I1010 13:33:26.712354  110878 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind-2] status: phase Pending already set
I1010 13:33:26.712375  110878 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4", Name:"pvc-w-cannotbind-2", UID:"e067acfd-ebc6-4489-82ca-89faf4d3d3fe", APIVersion:"v1", ResourceVersion:"32965", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1010 13:33:26.714376  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (1.857875ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.714820  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind-2
I1010 13:33:26.716676  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind-2
I1010 13:33:26.715904  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (4.980132ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.717119  110878 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind-2", PVC "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind-2" on node "node-2"
I1010 13:33:26.717276  110878 scheduler_binder.go:725] storage class "wait-mts4" of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind-2" does not support dynamic provisioning
I1010 13:33:26.717540  110878 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind-2", PVC "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind-2" on node "node-1"
I1010 13:33:26.717665  110878 scheduler_binder.go:725] storage class "wait-mts4" of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind-2" does not support dynamic provisioning
I1010 13:33:26.717801  110878 factory.go:645] Unable to schedule volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind-2: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I1010 13:33:26.718041  110878 scheduler.go:746] Updating pod condition for volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind-2 to (PodScheduled==False, Reason=Unschedulable)
I1010 13:33:26.719385  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-cannotbind-2: (957.867µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.722948  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-cannotbind-2/status: (3.790074ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.722954  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (2.699419ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.724794  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-cannotbind-2: (1.079824ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.724975  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (5.130888ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45196]
I1010 13:33:26.725041  110878 generic_scheduler.go:325] Preemption will not help schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind-2 on any node.
I1010 13:33:26.822398  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-cannotbind-2: (2.943437ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.826271  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-cannotbind-1: (1.533273ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.828517  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-cannotbind-2: (1.638228ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.830780  110878 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-cannotbind-1: (1.599216ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.832551  110878 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-cannotbind-2: (1.198437ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.839050  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind-2
I1010 13:33:26.839290  110878 scheduler.go:594] Skip schedule deleting pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-cannotbind-2
I1010 13:33:26.840135  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (7.061198ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.847659  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (7.922789ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.848171  110878 pv_controller_base.go:265] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind-1" deleted
I1010 13:33:26.851550  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (10.432482ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.854651  110878 pv_controller_base.go:265] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-cannotbind-2" deleted
I1010 13:33:26.856924  110878 pv_controller_base.go:216] volume "pv-w-cannotbind-1" deleted
I1010 13:33:26.859712  110878 httplog.go:90] DELETE /api/v1/persistentvolumes: (7.726004ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.861094  110878 pv_controller_base.go:216] volume "pv-w-cannotbind-2" deleted
I1010 13:33:26.869165  110878 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (8.539769ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
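The "wait cannot bind two" run above exercises claims whose storage class uses delayed binding: both PVCs emit 'WaitForFirstConsumer' events, and the binder then reports that class "wait-mts4" does not support dynamic provisioning, so pod-w-cannotbind-2 stays unschedulable. As a rough sketch only (the helper and class name below are illustrative, not taken from the test fixtures), a storage class of that shape can be built with the client-go types:

// Hedged sketch: a StorageClass with volumeBindingMode WaitForFirstConsumer
// and no dynamic provisioner, matching the "waiting for first consumer to be
// created before binding" events and the "does not support dynamic
// provisioning" messages logged above.
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func waitForFirstConsumerClass(name string) *storagev1.StorageClass {
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	return &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: name},
		Provisioner:       "kubernetes.io/no-provisioner", // static PVs only; no dynamic provisioning
		VolumeBindingMode: &mode,
	}
}

func main() {
	sc := waitForFirstConsumerClass("wait-example")
	fmt.Println(sc.Name, *sc.VolumeBindingMode)
}

With no provisioner and delayed binding set, the PV controller leaves such claims Pending until the scheduler selects a node, which is the flow the log traces above.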
I1010 13:33:26.869879  110878 volume_binding_test.go:191] Running test mix immediate and wait
I1010 13:33:26.872448  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.186843ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.874878  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.946991ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.877476  110878 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-4", version 32990
I1010 13:33:26.877506  110878 httplog.go:90] POST /api/v1/persistentvolumes: (2.185867ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.877508  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Pending, bound to: "", boundByController: false
I1010 13:33:26.877533  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-4]: volume is unused
I1010 13:33:26.877539  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-4]: set phase Available
I1010 13:33:26.879899  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (2.052129ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.880286  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 32991
I1010 13:33:26.880342  110878 pv_controller.go:800] volume "pv-w-canbind-4" entered phase "Available"
I1010 13:33:26.880697  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 32991
I1010 13:33:26.880733  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Available, bound to: "", boundByController: false
I1010 13:33:26.880778  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-w-canbind-4]: volume is unused
I1010 13:33:26.880786  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-4]: set phase Available
I1010 13:33:26.880795  110878 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-4]: phase Available already set
I1010 13:33:26.881118  110878 httplog.go:90] POST /api/v1/persistentvolumes: (2.913746ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.882061  110878 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-canbind-2", version 32992
I1010 13:33:26.882233  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Pending, bound to: "", boundByController: false
I1010 13:33:26.882364  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-i-canbind-2]: volume is unused
I1010 13:33:26.882438  110878 pv_controller.go:779] updating PersistentVolume[pv-i-canbind-2]: set phase Available
I1010 13:33:26.884095  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (2.20214ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.884102  110878 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4", version 32993
I1010 13:33:26.885359  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:26.885475  110878 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4]: no volume found
I1010 13:33:26.885554  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4] status: set phase Pending
I1010 13:33:26.885632  110878 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4] status: phase Pending already set
I1010 13:33:26.885897  110878 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4", Name:"pvc-w-canbind-4", UID:"cbc4f39f-3314-4fdb-8ed1-2aca0b314ee5", APIVersion:"v1", ResourceVersion:"32993", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1010 13:33:26.884869  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (2.008645ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.886500  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 32994
I1010 13:33:26.886533  110878 pv_controller.go:800] volume "pv-i-canbind-2" entered phase "Available"
I1010 13:33:26.886556  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 32994
I1010 13:33:26.886572  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Available, bound to: "", boundByController: false
I1010 13:33:26.886599  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-i-canbind-2]: volume is unused
I1010 13:33:26.886612  110878 pv_controller.go:779] updating PersistentVolume[pv-i-canbind-2]: set phase Available
I1010 13:33:26.886617  110878 pv_controller.go:782] updating PersistentVolume[pv-i-canbind-2]: phase Available already set
I1010 13:33:26.889037  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (2.970064ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.890160  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (2.18551ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I1010 13:33:26.890592  110878 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2", version 32996
I1010 13:33:26.890625  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:26.890653  110878 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2]: volume "pv-i-canbind-2" found: phase: Available, bound to: "", boundByController: false
I1010 13:33:26.890664  110878 pv_controller.go:933] binding volume "pv-i-canbind-2" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2"
I1010 13:33:26.890678  110878 pv_controller.go:831] updating PersistentVolume[pv-i-canbind-2]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2"
I1010 13:33:26.890705  110878 pv_controller.go:851] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2" bound to volume "pv-i-canbind-2"
I1010 13:33:26.893030  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (2.091987ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.893938  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 32998
I1010 13:33:26.893986  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2 (uid: 1ed9c9e3-9eee-4ec1-86f6-4ed19783a09d)", boundByController: true
I1010 13:33:26.894004  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2
I1010 13:33:26.894016  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:26.894016  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2: (3.065972ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45200]
I1010 13:33:26.894027  110878 pv_controller.go:605] synchronizing PersistentVolume[pv-i-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I1010 13:33:26.894323  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 32998
I1010 13:33:26.894348  110878 pv_controller.go:864] updating PersistentVolume[pv-i-canbind-2]: bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2"
I1010 13:33:26.894372  110878 pv_controller.go:779] updating PersistentVolume[pv-i-canbind-2]: set phase Bound
I1010 13:33:26.895144  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound
I1010 13:33:26.895166  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound
E1010 13:33:26.895411  110878 factory.go:661] Error scheduling volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 13:33:26.895438  110878 scheduler.go:746] Updating pod condition for volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound to (PodScheduled==False, Reason=Unschedulable)
I1010 13:33:26.897713  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound/status: (2.053164ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
E1010 13:33:26.898166  110878 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 13:33:26.898373  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound
I1010 13:33:26.898386  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound
E1010 13:33:26.898538  110878 factory.go:661] Error scheduling volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 13:33:26.898556  110878 scheduler.go:746] Updating pod condition for volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound to (PodScheduled==False, Reason=Unschedulable)
E1010 13:33:26.898565  110878 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 13:33:26.900935  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (4.123343ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45204]
I1010 13:33:26.902540  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (3.683052ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.902947  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (3.25403ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45206]
I1010 13:33:26.903564  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 33000
I1010 13:33:26.903614  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2 (uid: 1ed9c9e3-9eee-4ec1-86f6-4ed19783a09d)", boundByController: true
I1010 13:33:26.903627  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2
I1010 13:33:26.903644  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:26.903660  110878 pv_controller.go:605] synchronizing PersistentVolume[pv-i-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I1010 13:33:26.904337  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (9.750154ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45200]
I1010 13:33:26.904762  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 33000
I1010 13:33:26.904806  110878 pv_controller.go:800] volume "pv-i-canbind-2" entered phase "Bound"
I1010 13:33:26.904822  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2]: binding to "pv-i-canbind-2"
I1010 13:33:26.904838  110878 pv_controller.go:903] volume "pv-i-canbind-2" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2"
I1010 13:33:26.905211  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (8.239524ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
E1010 13:33:26.905590  110878 factory.go:685] pod is already present in the backoffQ
I1010 13:33:26.906925  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-i-canbind-2: (1.80724ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I1010 13:33:26.907329  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2" with version 33004
I1010 13:33:26.907362  110878 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2]: bound to "pv-i-canbind-2"
I1010 13:33:26.907374  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2] status: set phase Bound
I1010 13:33:26.909203  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-i-canbind-2/status: (1.467928ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:26.909788  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2" with version 33005
I1010 13:33:26.909809  110878 pv_controller.go:744] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2" entered phase "Bound"
I1010 13:33:26.909825  110878 pv_controller.go:959] volume "pv-i-canbind-2" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2"
I1010 13:33:26.909841  110878 pv_controller.go:960] volume "pv-i-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2 (uid: 1ed9c9e3-9eee-4ec1-86f6-4ed19783a09d)", boundByController: true
I1010 13:33:26.909851  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2" status after binding: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I1010 13:33:26.909876  110878 pv_controller_base.go:533] storeObjectUpdate: ignoring claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2" version 33004
I1010 13:33:26.909988  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2" with version 33005
I1010 13:33:26.910013  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2]: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I1010 13:33:26.910037  110878 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2]: volume "pv-i-canbind-2" found: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2 (uid: 1ed9c9e3-9eee-4ec1-86f6-4ed19783a09d)", boundByController: true
I1010 13:33:26.910093  110878 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2]: claim is already correctly bound
I1010 13:33:26.910131  110878 pv_controller.go:933] binding volume "pv-i-canbind-2" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2"
I1010 13:33:26.910202  110878 pv_controller.go:831] updating PersistentVolume[pv-i-canbind-2]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2"
I1010 13:33:26.910344  110878 pv_controller.go:843] updating PersistentVolume[pv-i-canbind-2]: already bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2"
I1010 13:33:26.910491  110878 pv_controller.go:779] updating PersistentVolume[pv-i-canbind-2]: set phase Bound
I1010 13:33:26.910557  110878 pv_controller.go:782] updating PersistentVolume[pv-i-canbind-2]: phase Bound already set
I1010 13:33:26.910612  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2]: binding to "pv-i-canbind-2"
I1010 13:33:26.910693  110878 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2]: already bound to "pv-i-canbind-2"
I1010 13:33:26.910767  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2] status: set phase Bound
I1010 13:33:26.910880  110878 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2] status: phase Bound already set
I1010 13:33:26.910959  110878 pv_controller.go:959] volume "pv-i-canbind-2" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2"
I1010 13:33:26.911076  110878 pv_controller.go:960] volume "pv-i-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2 (uid: 1ed9c9e3-9eee-4ec1-86f6-4ed19783a09d)", boundByController: true
I1010 13:33:26.911157  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2" status after binding: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I1010 13:33:26.996386  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (2.040977ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:27.096614  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (2.290545ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:27.196680  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (2.348541ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:27.295890  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (1.649635ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:27.396263  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (1.97595ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:27.496340  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (2.09919ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:27.596641  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (1.98728ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:27.697124  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (2.796838ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:27.796580  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (2.102373ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:27.896337  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (1.980672ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:27.997259  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (2.020242ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:28.096402  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (2.029124ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:28.196153  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (1.882764ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:28.296208  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (1.916294ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:28.396032  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (1.77776ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:28.495940  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (1.739133ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:28.596249  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (1.883235ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:28.696873  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (2.533943ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:28.796349  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (2.022544ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:28.799451  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound
I1010 13:33:28.799475  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound
I1010 13:33:28.799707  110878 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound" match with Node "node-1"
I1010 13:33:28.799885  110878 scheduler_binder.go:653] PersistentVolume "pv-i-canbind-2", Node "node-2" mismatch for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound": No matching NodeSelectorTerms
I1010 13:33:28.799911  110878 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound", PVC "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4" on node "node-2"
I1010 13:33:28.799926  110878 scheduler_binder.go:725] storage class "wait-5cf6" of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4" does not support dynamic provisioning
I1010 13:33:28.800286  110878 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound" on node "node-1"
I1010 13:33:28.800374  110878 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound", node "node-1"
I1010 13:33:28.800416  110878 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-canbind-4", version 32991
I1010 13:33:28.800466  110878 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound", node "node-1"
I1010 13:33:28.800488  110878 scheduler_binder.go:404] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4" bound to volume "pv-w-canbind-4"
I1010 13:33:28.804464  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4: (3.433258ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:28.804667  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 33139
I1010 13:33:28.804705  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4 (uid: cbc4f39f-3314-4fdb-8ed1-2aca0b314ee5)", boundByController: true
I1010 13:33:28.804710  110878 scheduler_binder.go:410] updating PersistentVolume[pv-w-canbind-4]: bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4"
I1010 13:33:28.804717  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4
I1010 13:33:28.804765  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:28.804781  110878 pv_controller.go:605] synchronizing PersistentVolume[pv-w-canbind-4]: volume not bound yet, waiting for syncClaim to fix it
I1010 13:33:28.804815  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4" with version 32993
I1010 13:33:28.804829  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:28.804863  110878 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4]: volume "pv-w-canbind-4" found: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4 (uid: cbc4f39f-3314-4fdb-8ed1-2aca0b314ee5)", boundByController: true
I1010 13:33:28.804875  110878 pv_controller.go:933] binding volume "pv-w-canbind-4" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4"
I1010 13:33:28.804886  110878 pv_controller.go:831] updating PersistentVolume[pv-w-canbind-4]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4"
I1010 13:33:28.804901  110878 pv_controller.go:843] updating PersistentVolume[pv-w-canbind-4]: already bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4"
I1010 13:33:28.804911  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-4]: set phase Bound
I1010 13:33:28.807339  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (2.139761ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:28.807931  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 33140
I1010 13:33:28.807962  110878 pv_controller.go:800] volume "pv-w-canbind-4" entered phase "Bound"
I1010 13:33:28.807980  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4]: binding to "pv-w-canbind-4"
I1010 13:33:28.808000  110878 pv_controller.go:903] volume "pv-w-canbind-4" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4"
I1010 13:33:28.808456  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 33140
I1010 13:33:28.808488  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4 (uid: cbc4f39f-3314-4fdb-8ed1-2aca0b314ee5)", boundByController: true
I1010 13:33:28.808509  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4
I1010 13:33:28.808542  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:28.810512  110878 pv_controller.go:605] synchronizing PersistentVolume[pv-w-canbind-4]: volume not bound yet, waiting for syncClaim to fix it
I1010 13:33:28.812636  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-canbind-4: (3.68895ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:28.813007  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4" with version 33141
I1010 13:33:28.813033  110878 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4]: bound to "pv-w-canbind-4"
I1010 13:33:28.813045  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4] status: set phase Bound
I1010 13:33:28.815422  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-canbind-4/status: (2.008954ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:28.815670  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4" with version 33142
I1010 13:33:28.815727  110878 pv_controller.go:744] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4" entered phase "Bound"
I1010 13:33:28.815762  110878 pv_controller.go:959] volume "pv-w-canbind-4" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4"
I1010 13:33:28.815787  110878 pv_controller.go:960] volume "pv-w-canbind-4" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4 (uid: cbc4f39f-3314-4fdb-8ed1-2aca0b314ee5)", boundByController: true
I1010 13:33:28.815825  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4" status after binding: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
I1010 13:33:28.815873  110878 pv_controller_base.go:533] storeObjectUpdate: ignoring claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4" version 33141
I1010 13:33:28.816021  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4" with version 33142
I1010 13:33:28.816049  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4]: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
I1010 13:33:28.816086  110878 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4]: volume "pv-w-canbind-4" found: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4 (uid: cbc4f39f-3314-4fdb-8ed1-2aca0b314ee5)", boundByController: true
I1010 13:33:28.816102  110878 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4]: claim is already correctly bound
I1010 13:33:28.816112  110878 pv_controller.go:933] binding volume "pv-w-canbind-4" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4"
I1010 13:33:28.816120  110878 pv_controller.go:831] updating PersistentVolume[pv-w-canbind-4]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4"
I1010 13:33:28.816151  110878 pv_controller.go:843] updating PersistentVolume[pv-w-canbind-4]: already bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4"
I1010 13:33:28.816161  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-4]: set phase Bound
I1010 13:33:28.816168  110878 pv_controller.go:782] updating PersistentVolume[pv-w-canbind-4]: phase Bound already set
I1010 13:33:28.816174  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4]: binding to "pv-w-canbind-4"
I1010 13:33:28.816195  110878 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4]: already bound to "pv-w-canbind-4"
I1010 13:33:28.816204  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4] status: set phase Bound
I1010 13:33:28.816219  110878 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4] status: phase Bound already set
I1010 13:33:28.816227  110878 pv_controller.go:959] volume "pv-w-canbind-4" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4"
I1010 13:33:28.816240  110878 pv_controller.go:960] volume "pv-w-canbind-4" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4 (uid: cbc4f39f-3314-4fdb-8ed1-2aca0b314ee5)", boundByController: true
I1010 13:33:28.816249  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4" status after binding: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
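The block above is one full syncVolume/syncClaim cycle for pv-w-canbind-4 and pvc-w-canbind-4, ending with both objects reporting phase Bound. A minimal sketch of how a test could wait for that state with client-go follows; it is illustrative only, the function name, polling interval and timeout are assumptions, and it uses the context-free Get signature of the client-go vintage at this commit (newer releases add a context argument).

package volumescheduling

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls the claim until it reports phase Bound, i.e. the
// state logged above as `claim "..." entered phase "Bound"`.
func waitForPVCBound(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pvc.Status.Phase == v1.ClaimBound, nil
	})
}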
I1010 13:33:28.896195  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (1.90245ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:28.997064  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (1.791752ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.095861  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (1.594346ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.196129  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (1.824278ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.296207  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (1.98176ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.395887  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (1.772483ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.499123  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (1.755432ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.596453  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (2.07542ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.696343  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (1.973762ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.796424  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (2.15063ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.799391  110878 cache.go:669] Couldn't expire cache for pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound. Binding is still in progress.
I1010 13:33:29.805000  110878 scheduler_binder.go:553] All PVCs for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound" are bound
I1010 13:33:29.805063  110878 factory.go:710] Attempting to bind pod-mix-bound to node-1
I1010 13:33:29.808127  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound/binding: (2.771887ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.808406  110878 scheduler.go:730] pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-mix-bound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1010 13:33:29.810548  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (1.762039ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
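At this point the scheduler has chosen node-1 for pod-mix-bound and issued the POST .../pods/pod-mix-bound/binding shown above. The sketch below shows the kind of Binding object behind that request; it is an assumption-laden illustration rather than the scheduler's own code, and it uses the one-argument Bind signature of the client-go vintage at this commit (later releases take a context and CreateOptions).

package volumescheduling

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPodToNode posts a Binding for the pod, i.e. the same kind of
// POST .../pods/<pod>/binding request visible in the log above.
func bindPodToNode(cs kubernetes.Interface, ns, pod, node string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: pod},
		Target:     v1.ObjectReference{Kind: "Node", Name: node},
	}
	return cs.CoreV1().Pods(ns).Bind(binding)
}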
I1010 13:33:29.896543  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-mix-bound: (2.244523ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.898095  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-canbind-4: (1.054218ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.899330  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-i-canbind-2: (872.717µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.900646  110878 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-4: (966.487µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.902217  110878 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-canbind-2: (1.012127ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.907682  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (5.049225ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.913794  110878 pv_controller_base.go:265] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2" deleted
I1010 13:33:29.913862  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 33000
I1010 13:33:29.913899  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2 (uid: 1ed9c9e3-9eee-4ec1-86f6-4ed19783a09d)", boundByController: true
I1010 13:33:29.913911  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2
I1010 13:33:29.916498  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-i-canbind-2: (2.334768ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45204]
I1010 13:33:29.916818  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2 not found
I1010 13:33:29.916850  110878 pv_controller.go:577] volume "pv-i-canbind-2" is released and reclaim policy "Retain" will be executed
I1010 13:33:29.916862  110878 pv_controller.go:779] updating PersistentVolume[pv-i-canbind-2]: set phase Released
I1010 13:33:29.919260  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (2.136949ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45204]
I1010 13:33:29.919676  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 33261
I1010 13:33:29.919693  110878 pv_controller.go:800] volume "pv-i-canbind-2" entered phase "Released"
I1010 13:33:29.919701  110878 pv_controller.go:1013] reclaimVolume[pv-i-canbind-2]: policy is Retain, nothing to do
I1010 13:33:29.919953  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (11.85197ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.919987  110878 pv_controller_base.go:265] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4" deleted
I1010 13:33:29.920008  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 33140
I1010 13:33:29.920031  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4 (uid: cbc4f39f-3314-4fdb-8ed1-2aca0b314ee5)", boundByController: true
I1010 13:33:29.920039  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4
I1010 13:33:29.922411  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-canbind-4: (2.244487ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45204]
I1010 13:33:29.922728  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4 not found
I1010 13:33:29.922782  110878 pv_controller.go:577] volume "pv-w-canbind-4" is released and reclaim policy "Retain" will be executed
I1010 13:33:29.922796  110878 pv_controller.go:779] updating PersistentVolume[pv-w-canbind-4]: set phase Released
I1010 13:33:29.925290  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (2.256878ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45204]
I1010 13:33:29.927720  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 33262
I1010 13:33:29.927780  110878 pv_controller.go:800] volume "pv-w-canbind-4" entered phase "Released"
I1010 13:33:29.927790  110878 pv_controller.go:1013] reclaimVolume[pv-w-canbind-4]: policy is Retain, nothing to do
I1010 13:33:29.927817  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 33261
I1010 13:33:29.927841  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Released, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2 (uid: 1ed9c9e3-9eee-4ec1-86f6-4ed19783a09d)", boundByController: true
I1010 13:33:29.927868  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2
I1010 13:33:29.927890  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2 not found
I1010 13:33:29.927897  110878 pv_controller.go:1013] reclaimVolume[pv-i-canbind-2]: policy is Retain, nothing to do
I1010 13:33:29.927912  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 33262
I1010 13:33:29.927940  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Released, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4 (uid: cbc4f39f-3314-4fdb-8ed1-2aca0b314ee5)", boundByController: true
I1010 13:33:29.927952  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4
I1010 13:33:29.927971  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4 not found
I1010 13:33:29.927977  110878 pv_controller.go:1013] reclaimVolume[pv-w-canbind-4]: policy is Retain, nothing to do
I1010 13:33:29.932819  110878 httplog.go:90] DELETE /api/v1/persistentvolumes: (12.27952ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.939282  110878 pv_controller_base.go:216] volume "pv-i-canbind-2" deleted
I1010 13:33:29.939309  110878 pv_controller_base.go:216] volume "pv-w-canbind-4" deleted
I1010 13:33:29.939601  110878 pv_controller_base.go:403] deletion of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind-2" was already processed
I1010 13:33:29.939624  110878 pv_controller_base.go:403] deletion of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-canbind-4" was already processed
I1010 13:33:29.945626  110878 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (12.467639ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
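The cleanup above deletes the pod, claims, volumes and storage classes of the previous case; because both PVs carry reclaim policy "Retain", the controller only marks them Released ("policy is Retain, nothing to do") before the explicit DELETE removes them. A sketch of a PV with that policy follows; the hostPath source, capacity and access mode are placeholders and not taken from the actual test fixtures.

package volumescheduling

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// retainPV builds a PersistentVolume whose reclaim policy is Retain, so a
// released volume is kept (phase Released) rather than deleted or recycled
// when its claim goes away, matching the controller lines above.
func retainPV() *v1.PersistentVolume {
	return &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "pv-w-canbind-4"},
		Spec: v1.PersistentVolumeSpec{
			PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimRetain,
			Capacity: v1.ResourceList{
				v1.ResourceStorage: resource.MustParse("5Gi"), // placeholder size
			},
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				HostPath: &v1.HostPathVolumeSource{Path: "/tmp/pv-w-canbind-4"}, // placeholder source
			},
		},
	}
}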
I1010 13:33:29.945797  110878 volume_binding_test.go:191] Running test immediate can bind
I1010 13:33:29.947846  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.788322ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.949927  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.581669ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.952636  110878 httplog.go:90] POST /api/v1/persistentvolumes: (2.220078ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.953477  110878 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-canbind", version 33278
I1010 13:33:29.953514  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind]: phase: Pending, bound to: "", boundByController: false
I1010 13:33:29.953537  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-i-canbind]: volume is unused
I1010 13:33:29.953545  110878 pv_controller.go:779] updating PersistentVolume[pv-i-canbind]: set phase Available
I1010 13:33:29.955821  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (2.036334ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45204]
I1010 13:33:29.956499  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (2.612305ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.956697  110878 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind", version 33282
I1010 13:33:29.956733  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:29.956829  110878 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind]: no volume found
I1010 13:33:29.956841  110878 pv_controller.go:1328] provisionClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind]: started
E1010 13:33:29.956866  110878 pv_controller.go:1333] error finding provisioning plugin for claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind: no volume plugin matched
I1010 13:33:29.956996  110878 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4", Name:"pvc-i-canbind", UID:"e96f67ea-e150-4c91-b698-d5cd7fa95ee3", APIVersion:"v1", ResourceVersion:"33282", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' no volume plugin matched
I1010 13:33:29.958374  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 33281
I1010 13:33:29.958406  110878 pv_controller.go:800] volume "pv-i-canbind" entered phase "Available"
I1010 13:33:29.958432  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 33281
I1010 13:33:29.958447  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind]: phase: Available, bound to: "", boundByController: false
I1010 13:33:29.958479  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-i-canbind]: volume is unused
I1010 13:33:29.958502  110878 pv_controller.go:779] updating PersistentVolume[pv-i-canbind]: set phase Available
I1010 13:33:29.958516  110878 pv_controller.go:782] updating PersistentVolume[pv-i-canbind]: phase Available already set
I1010 13:33:29.959451  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (2.010966ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45382]
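The ProvisioningFailed warning above ("no volume plugin matched") is expected here: the claim's StorageClass apparently names a provisioner for which no in-tree plugin is registered, so the PV controller cannot dynamically provision and the claim stays Pending until a matching pre-created PV (pv-i-canbind) is bound. A hedged sketch of such an Immediate-mode class and claim is below; the class name, provisioner string and requested size are assumptions.

package volumescheduling

import (
	v1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// immediateClassAndClaim returns an Immediate-binding StorageClass whose
// provisioner matches no in-tree plugin, plus a claim that references it,
// which reproduces the kind of ProvisioningFailed warning logged above.
func immediateClassAndClaim(ns string) (*storagev1.StorageClass, *v1.PersistentVolumeClaim) {
	mode := storagev1.VolumeBindingImmediate
	className := "immediate-sc" // placeholder; the real class name is not shown in the log
	sc := &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: className},
		Provisioner:       "kubernetes.io/no-provisioner", // assumed non-provisioning provisioner
		VolumeBindingMode: &mode,
	}
	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: "pvc-i-canbind"},
		Spec: v1.PersistentVolumeClaimSpec{
			StorageClassName: &className,
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
			},
		},
	}
	return sc, pvc
}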
I1010 13:33:29.960279  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (3.198542ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.960714  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind
I1010 13:33:29.960731  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind
I1010 13:33:29.960789  110878 factory.go:647] Unable to schedule volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind: possibly due to node not found: persistentvolumeclaim "pvc-i-canbind" not found; waiting
I1010 13:33:29.960817  110878 scheduler.go:746] Updating pod condition for volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind to (PodScheduled==False, Reason=Unschedulable)
I1010 13:33:29.964505  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (3.091059ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45204]
I1010 13:33:29.964896  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (3.695648ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45382]
I1010 13:33:29.964939  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind/status: (3.832383ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
E1010 13:33:29.965306  110878 scheduler.go:627] error selecting node for pod: persistentvolumeclaim "pvc-i-canbind" not found
I1010 13:33:29.965423  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind
I1010 13:33:29.965436  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind
E1010 13:33:29.965679  110878 factory.go:661] Error scheduling volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 13:33:29.965710  110878 scheduler.go:746] Updating pod condition for volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind to (PodScheduled==False, Reason=Unschedulable)
I1010 13:33:29.968948  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind/status: (2.829669ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
E1010 13:33:29.969274  110878 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 13:33:29.969536  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind
I1010 13:33:29.969571  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind
E1010 13:33:29.969720  110878 factory.go:661] Error scheduling volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 13:33:29.969738  110878 scheduler.go:746] Updating pod condition for volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind to (PodScheduled==False, Reason=Unschedulable)
E1010 13:33:29.969776  110878 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 13:33:29.972874  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.429481ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:29.973306  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (6.855674ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:29.973413  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (3.211341ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I1010 13:33:29.974581  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (8.431334ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45382]
E1010 13:33:29.974902  110878 factory.go:685] pod is already present in unschedulableQ
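The retries above come from scheduling a pod whose PVC uses Immediate binding but is not yet bound; the scheduler reports "pod has unbound immediate PersistentVolumeClaims" and parks the pod in the unschedulable queue until the claim binds. A minimal sketch of such a pod spec follows; the container name and image are placeholders.

package volumescheduling

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithClaim builds a pod that mounts the given PVC. While that claim is
// Pending, scheduling fails with the "unbound immediate
// PersistentVolumeClaims" error seen in the lines above.
func podWithClaim(ns, name, claim string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: name},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "app",
				Image: "k8s.gcr.io/pause:3.1", // placeholder image
				VolumeMounts: []v1.VolumeMount{{
					Name:      "data",
					MountPath: "/mnt/data",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "data",
				VolumeSource: v1.VolumeSource{
					PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{
						ClaimName: claim,
					},
				},
			}},
		},
	}
}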
I1010 13:33:30.064930  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (3.526967ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:30.163416  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.226267ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:30.263233  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.02332ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:30.363678  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.362117ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:30.463694  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.270299ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:30.563622  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.236785ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:30.663572  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.248345ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:30.763319  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.811427ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:30.862876  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.620539ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:30.963208  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.856838ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:31.063728  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.311059ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:31.162997  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.70479ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:31.264178  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.940325ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:31.363049  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.765002ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:31.463012  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.686279ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:31.563136  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.93937ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:31.663232  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.927098ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:31.764304  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.986604ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:31.863085  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.833302ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:31.963115  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.747859ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:32.066400  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (5.194108ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:32.162653  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.396332ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:32.262914  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.568443ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:32.362826  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.547812ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:32.464577  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (3.308553ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:32.563066  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.739276ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:32.663731  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.455523ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:32.763964  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.62534ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:32.863120  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.791564ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:32.962982  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.802032ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:33.064526  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (3.207834ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:33.166404  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.589754ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:33.262939  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.664886ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:33.362648  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.468413ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:33.463051  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.798241ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:33.563596  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.308667ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:33.664639  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.668592ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:33.763600  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.92806ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:33.828244  110878 httplog.go:90] GET /api/v1/namespaces/default: (1.56455ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:33.832025  110878 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (3.303849ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:33.833425  110878 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.02569ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:33.863048  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.8075ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:33.963276  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.069572ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:34.063226  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.780448ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:34.163276  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.957222ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:34.263442  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.206774ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:34.366277  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (4.397268ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:34.463077  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.764072ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:34.563036  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.711833ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:34.663293  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.983283ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:34.763386  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.114971ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:34.863148  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.880854ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:34.964205  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.946785ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:35.063639  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.278787ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:35.165129  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (3.604193ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:35.263208  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.958311ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:35.363030  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.826513ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:35.462841  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.588524ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:35.563177  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.899838ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:35.663119  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.872726ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:35.763165  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.898533ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:35.862970  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.727781ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:35.962711  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.517832ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:36.062939  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.675373ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:36.163217  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.627799ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:36.263977  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.332231ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:36.363384  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.240586ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:36.463055  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.79924ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:36.562629  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.46056ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:36.663117  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.892293ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:36.764466  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (3.093631ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:36.863376  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.153577ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:36.963904  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.951424ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:37.063064  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.817616ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:37.162601  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.432808ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:37.263476  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.104815ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:37.363124  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.884862ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:37.464384  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.279754ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:37.563306  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.095686ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:37.663448  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.191947ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:37.764898  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (3.645638ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:37.862980  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.761504ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:37.963894  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.650328ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:38.067360  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (6.154686ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:38.162903  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.769714ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:38.262965  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.751562ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:38.362448  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.314691ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:38.463705  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.435242ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:38.564441  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (3.227552ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:38.663130  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.889791ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:38.764359  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (3.163264ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:38.862694  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.57661ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:38.963878  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.6031ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
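The long run of GET .../pods/pod-i-canbind requests above is the test polling the pod roughly every 100ms while it waits for scheduling. A sketch of that kind of wait loop is below, again with assumed names, interval and timeout, and the context-free Get signature of this era's client-go.

package volumescheduling

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodScheduled polls the pod until its PodScheduled condition is
// True, which is what produces the steady stream of GET requests above
// while pod-i-canbind is still unschedulable.
func waitForPodScheduled(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(100*time.Millisecond, time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == v1.PodScheduled && c.Status == v1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}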
I1010 13:33:39.011274  110878 pv_controller_base.go:426] resyncing PV controller
I1010 13:33:39.011377  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 33281
I1010 13:33:39.011412  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind]: phase: Available, bound to: "", boundByController: false
I1010 13:33:39.011433  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-i-canbind]: volume is unused
I1010 13:33:39.011439  110878 pv_controller.go:779] updating PersistentVolume[pv-i-canbind]: set phase Available
I1010 13:33:39.011446  110878 pv_controller.go:782] updating PersistentVolume[pv-i-canbind]: phase Available already set
I1010 13:33:39.011472  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind" with version 33282
I1010 13:33:39.011488  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:39.011520  110878 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind]: volume "pv-i-canbind" found: phase: Available, bound to: "", boundByController: false
I1010 13:33:39.011529  110878 pv_controller.go:933] binding volume "pv-i-canbind" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind"
I1010 13:33:39.011537  110878 pv_controller.go:831] updating PersistentVolume[pv-i-canbind]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind"
I1010 13:33:39.011574  110878 pv_controller.go:851] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind" bound to volume "pv-i-canbind"
I1010 13:33:39.015379  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind: (3.298281ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:39.016077  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 34613
I1010 13:33:39.016106  110878 pv_controller.go:864] updating PersistentVolume[pv-i-canbind]: bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind"
I1010 13:33:39.016119  110878 pv_controller.go:779] updating PersistentVolume[pv-i-canbind]: set phase Bound
I1010 13:33:39.018024  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind
I1010 13:33:39.018043  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind
E1010 13:33:39.018304  110878 factory.go:661] Error scheduling volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 13:33:39.018327  110878 scheduler.go:746] Updating pod condition for volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind to (PodScheduled==False, Reason=Unschedulable)
E1010 13:33:39.018341  110878 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 13:33:39.019249  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 34613
I1010 13:33:39.019281  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind]: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind (uid: e96f67ea-e150-4c91-b698-d5cd7fa95ee3)", boundByController: true
I1010 13:33:39.019293  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind
I1010 13:33:39.019312  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:39.019328  110878 pv_controller.go:605] synchronizing PersistentVolume[pv-i-canbind]: volume not bound yet, waiting for syncClaim to fix it
I1010 13:33:39.022014  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.407074ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I1010 13:33:39.022851  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (6.298788ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:39.023032  110878 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events/pod-i-canbind.15cc4c6f11d54fb6: (3.511821ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:39.023278  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 34614
I1010 13:33:39.023302  110878 pv_controller.go:800] volume "pv-i-canbind" entered phase "Bound"
I1010 13:33:39.023316  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind]: binding to "pv-i-canbind"
I1010 13:33:39.023332  110878 pv_controller.go:903] volume "pv-i-canbind" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind"
I1010 13:33:39.023389  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 34614
I1010 13:33:39.023418  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind (uid: e96f67ea-e150-4c91-b698-d5cd7fa95ee3)", boundByController: true
I1010 13:33:39.023432  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind
I1010 13:33:39.023450  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:39.023465  110878 pv_controller.go:605] synchronizing PersistentVolume[pv-i-canbind]: volume not bound yet, waiting for syncClaim to fix it
I1010 13:33:39.026150  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-i-canbind: (2.656892ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45386]
I1010 13:33:39.026521  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind" with version 34616
I1010 13:33:39.026554  110878 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind]: bound to "pv-i-canbind"
I1010 13:33:39.026564  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind] status: set phase Bound
I1010 13:33:39.028381  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-i-canbind/status: (1.553941ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:39.029351  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind" with version 34617
I1010 13:33:39.029381  110878 pv_controller.go:744] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind" entered phase "Bound"
I1010 13:33:39.029400  110878 pv_controller.go:959] volume "pv-i-canbind" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind"
I1010 13:33:39.029424  110878 pv_controller.go:960] volume "pv-i-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind (uid: e96f67ea-e150-4c91-b698-d5cd7fa95ee3)", boundByController: true
I1010 13:33:39.029442  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind" status after binding: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I1010 13:33:39.029472  110878 pv_controller_base.go:533] storeObjectUpdate: ignoring claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind" version 34616
I1010 13:33:39.029503  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind" with version 34617
I1010 13:33:39.029528  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind]: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I1010 13:33:39.029549  110878 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind]: volume "pv-i-canbind" found: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind (uid: e96f67ea-e150-4c91-b698-d5cd7fa95ee3)", boundByController: true
I1010 13:33:39.029562  110878 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind]: claim is already correctly bound
I1010 13:33:39.029574  110878 pv_controller.go:933] binding volume "pv-i-canbind" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind"
I1010 13:33:39.029590  110878 pv_controller.go:831] updating PersistentVolume[pv-i-canbind]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind"
I1010 13:33:39.029607  110878 pv_controller.go:843] updating PersistentVolume[pv-i-canbind]: already bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind"
I1010 13:33:39.029617  110878 pv_controller.go:779] updating PersistentVolume[pv-i-canbind]: set phase Bound
I1010 13:33:39.029626  110878 pv_controller.go:782] updating PersistentVolume[pv-i-canbind]: phase Bound already set
I1010 13:33:39.029635  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind]: binding to "pv-i-canbind"
I1010 13:33:39.029651  110878 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind]: already bound to "pv-i-canbind"
I1010 13:33:39.029659  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind] status: set phase Bound
I1010 13:33:39.029677  110878 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind] status: phase Bound already set
I1010 13:33:39.029690  110878 pv_controller.go:959] volume "pv-i-canbind" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind"
I1010 13:33:39.029707  110878 pv_controller.go:960] volume "pv-i-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind (uid: e96f67ea-e150-4c91-b698-d5cd7fa95ee3)", boundByController: true
I1010 13:33:39.029719  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind" status after binding: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I1010 13:33:39.063698  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.505706ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:39.163499  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.022419ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:39.263210  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.965591ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:39.362981  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.756431ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:39.465342  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.21915ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:39.563149  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.889253ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:39.663625  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.345164ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:39.766566  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (5.210365ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:39.863645  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.308391ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:39.970695  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (9.399439ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:40.063198  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.939334ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:40.163345  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.126551ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:40.263312  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.044705ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:40.370004  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (8.618375ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:40.464215  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (3.007572ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:40.563877  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.302985ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:40.663009  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.783757ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:40.764240  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (3.016607ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:40.863105  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.95833ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:40.963251  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.02637ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.064703  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (3.425895ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.164224  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (2.937653ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.262823  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.615506ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.362733  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.546691ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.462913  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.712506ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.563155  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.793633ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.662845  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.601516ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.762969  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.626947ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.804857  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind
I1010 13:33:41.804894  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind
I1010 13:33:41.805125  110878 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind" match with Node "node-1"
I1010 13:33:41.805184  110878 scheduler_binder.go:653] PersistentVolume "pv-i-canbind", Node "node-2" mismatch for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind": No matching NodeSelectorTerms
I1010 13:33:41.805238  110878 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind", node "node-1"
I1010 13:33:41.805248  110878 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind", node "node-1": all PVCs bound and nothing to do
I1010 13:33:41.805287  110878 factory.go:710] Attempting to bind pod-i-canbind to node-1
I1010 13:33:41.809247  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind/binding: (3.347319ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.809735  110878 scheduler.go:730] pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-canbind is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1010 13:33:41.812705  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (2.515812ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.862856  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-canbind: (1.688949ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.864561  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-i-canbind: (1.237991ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.866184  110878 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-canbind: (1.216991ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.874172  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (7.35635ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.880906  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (5.423812ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.881623  110878 pv_controller_base.go:265] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind" deleted
I1010 13:33:41.881670  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 34614
I1010 13:33:41.881699  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind (uid: e96f67ea-e150-4c91-b698-d5cd7fa95ee3)", boundByController: true
I1010 13:33:41.881708  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind
I1010 13:33:41.883046  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-i-canbind: (1.127304ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I1010 13:33:41.883301  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind not found
I1010 13:33:41.883324  110878 pv_controller.go:577] volume "pv-i-canbind" is released and reclaim policy "Retain" will be executed
I1010 13:33:41.883336  110878 pv_controller.go:779] updating PersistentVolume[pv-i-canbind]: set phase Released
I1010 13:33:41.887380  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (3.283786ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I1010 13:33:41.887589  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 35212
I1010 13:33:41.887616  110878 pv_controller.go:800] volume "pv-i-canbind" entered phase "Released"
I1010 13:33:41.887627  110878 pv_controller.go:1013] reclaimVolume[pv-i-canbind]: policy is Retain, nothing to do
I1010 13:33:41.887645  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 35212
I1010 13:33:41.887663  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-canbind]: phase: Released, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind (uid: e96f67ea-e150-4c91-b698-d5cd7fa95ee3)", boundByController: true
I1010 13:33:41.887675  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind
I1010 13:33:41.887705  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind not found
I1010 13:33:41.887712  110878 pv_controller.go:1013] reclaimVolume[pv-i-canbind]: policy is Retain, nothing to do
I1010 13:33:41.889321  110878 httplog.go:90] DELETE /api/v1/persistentvolumes: (7.41425ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.889804  110878 pv_controller_base.go:216] volume "pv-i-canbind" deleted
I1010 13:33:41.890061  110878 pv_controller_base.go:403] deletion of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-canbind" was already processed
I1010 13:33:41.898351  110878 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (7.932216ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
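Aside, not part of the captured log: in the "immediate can bind" case above, the volume binder accepted node-1 for pv-i-canbind but rejected node-2 with "No matching NodeSelectorTerms" (13:33:41.805), and on teardown the released volume was left alone because its reclaim policy is Retain. The Go sketch below shows, purely as an illustration, what a PersistentVolume of that general shape looks like; the label key, capacity, local path, and names are assumptions, not the test's actual fixture from volume_binding_test.go.

package example

import (
    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nodeConstrainedPV returns a PersistentVolume whose required node affinity only
// admits node-1 -- the kind of constraint that makes the volume binder report
// "No matching NodeSelectorTerms" for every other node. Label key, path, capacity
// and reclaim policy are illustrative values, not copied from the test fixtures.
func nodeConstrainedPV() *v1.PersistentVolume {
    return &v1.PersistentVolume{
        ObjectMeta: metav1.ObjectMeta{Name: "pv-i-canbind"},
        Spec: v1.PersistentVolumeSpec{
            Capacity:                      v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
            AccessModes:                   []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
            PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimRetain, // matches the "reclaim policy Retain ... nothing to do" lines above
            PersistentVolumeSource: v1.PersistentVolumeSource{
                Local: &v1.LocalVolumeSource{Path: "/tmp/pv-i-canbind"},
            },
            NodeAffinity: &v1.VolumeNodeAffinity{
                Required: &v1.NodeSelector{
                    NodeSelectorTerms: []v1.NodeSelectorTerm{{
                        MatchExpressions: []v1.NodeSelectorRequirement{{
                            Key:      "kubernetes.io/hostname",
                            Operator: v1.NodeSelectorOpIn,
                            Values:   []string{"node-1"},
                        }},
                    }},
                },
            },
        },
    }
}

Because the affinity admits only node-1, any pod using a claim bound to such a volume can only be placed there, which lines up with the "2 nodes evaluated, 1 nodes were found feasible" result logged above.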
I1010 13:33:41.898578  110878 volume_binding_test.go:191] Running test immediate cannot bind
I1010 13:33:41.900803  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.955616ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.903672  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.279473ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.908378  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (4.306501ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.908995  110878 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-cannotbind", version 35223
I1010 13:33:41.909024  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-cannotbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:41.909051  110878 pv_controller.go:305] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-cannotbind]: no volume found
I1010 13:33:41.909060  110878 pv_controller.go:1328] provisionClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-cannotbind]: started
E1010 13:33:41.909089  110878 pv_controller.go:1333] error finding provisioning plugin for claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-cannotbind: no volume plugin matched
I1010 13:33:41.909377  110878 event.go:262] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4", Name:"pvc-i-cannotbind", UID:"c46fa34c-2e0b-4a5f-b9f4-ae2aa9a24286", APIVersion:"v1", ResourceVersion:"35223", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' no volume plugin matched
I1010 13:33:41.912838  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (3.522526ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.914657  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-cannotbind
I1010 13:33:41.915558  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-cannotbind
I1010 13:33:41.914736  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (5.280421ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
E1010 13:33:41.916017  110878 factory.go:661] Error scheduling volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-cannotbind: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 13:33:41.916108  110878 scheduler.go:746] Updating pod condition for volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I1010 13:33:41.919965  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (2.949445ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:41.921909  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-cannotbind: (2.96475ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49030]
I1010 13:33:41.923355  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-cannotbind/status: (6.347709ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
E1010 13:33:41.923899  110878 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 13:33:42.018302  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-cannotbind: (2.007746ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49030]
I1010 13:33:42.021574  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-i-cannotbind: (2.675257ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49030]
I1010 13:33:42.027232  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-cannotbind
I1010 13:33:42.027288  110878 scheduler.go:594] Skip schedule deleting pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-cannotbind
I1010 13:33:42.029716  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (2.080793ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:42.030399  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (8.409683ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49030]
I1010 13:33:42.034161  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (3.281622ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49030]
I1010 13:33:42.034795  110878 pv_controller_base.go:265] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-cannotbind" deleted
I1010 13:33:42.036181  110878 httplog.go:90] DELETE /api/v1/persistentvolumes: (1.161082ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49030]
I1010 13:33:42.043331  110878 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.845366ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49030]
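Aside, not part of the captured log: the "immediate cannot bind" case above ends with pvc-i-cannotbind stuck Pending (ProvisioningFailed, "no volume plugin matched") and pod-i-cannotbind rejected with "pod has unbound immediate PersistentVolumeClaims". As a rough illustration only, and as an assumption rather than the test's actual fixture, a claim can be left unbindable this way by pointing it at an Immediate-mode StorageClass whose "kubernetes.io/..." provisioner matches no registered in-tree plugin, with no pre-existing PV for syncClaim to bind. Types follow the k8s.io/api packages of this repo's era.

package example

import (
    v1 "k8s.io/api/core/v1"
    storagev1 "k8s.io/api/storage/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unbindableImmediateClaim returns a StorageClass/PVC pair the PV controller cannot
// satisfy: Immediate binding, a provisioner with no matching in-tree plugin, and no
// pre-existing PV. All names and sizes are illustrative, not copied from the test.
func unbindableImmediateClaim(namespace string) (*storagev1.StorageClass, *v1.PersistentVolumeClaim) {
    immediate := storagev1.VolumeBindingImmediate
    sc := &storagev1.StorageClass{
        ObjectMeta:        metav1.ObjectMeta{Name: "immediate-unbindable"},
        Provisioner:       "kubernetes.io/no-provisioner",
        VolumeBindingMode: &immediate,
    }
    className := sc.Name
    pvc := &v1.PersistentVolumeClaim{
        ObjectMeta: metav1.ObjectMeta{Name: "pvc-i-cannotbind", Namespace: namespace},
        Spec: v1.PersistentVolumeClaimSpec{
            StorageClassName: &className,
            AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
            Resources: v1.ResourceRequirements{
                Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
            },
        },
    }
    return sc, pvc
}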
I1010 13:33:42.043509  110878 volume_binding_test.go:191] Running test immediate pv prebound
I1010 13:33:42.045812  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.096867ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49030]
I1010 13:33:42.048006  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.681772ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49030]
I1010 13:33:42.050433  110878 httplog.go:90] POST /api/v1/persistentvolumes: (1.995913ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49030]
I1010 13:33:42.050559  110878 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-prebound", version 35303
I1010 13:33:42.050611  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Pending, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound (uid: )", boundByController: false
I1010 13:33:42.050620  110878 pv_controller.go:508] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound
I1010 13:33:42.050628  110878 pv_controller.go:779] updating PersistentVolume[pv-i-prebound]: set phase Available
I1010 13:33:42.053704  110878 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound", version 35305
I1010 13:33:42.053734  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:42.053926  110878 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Pending, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound (uid: )", boundByController: false
I1010 13:33:42.053947  110878 pv_controller.go:933] binding volume "pv-i-prebound" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound"
I1010 13:33:42.054058  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (2.913016ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:42.054104  110878 pv_controller.go:831] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound"
I1010 13:33:42.054181  110878 pv_controller.go:851] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I1010 13:33:42.058398  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (3.464995ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:42.058862  110878 store.go:365] GuaranteedUpdate of /d82f0006-70be-429b-bed8-090d5fff3021/persistentvolumes/pv-i-prebound failed because of a conflict, going to retry
I1010 13:33:42.059050  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (8.209479ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49030]
I1010 13:33:42.059077  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (4.534838ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:42.059254  110878 pv_controller.go:854] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 13:33:42.059291  110878 pv_controller.go:936] error binding volume "pv-i-prebound" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 13:33:42.059307  110878 pv_controller_base.go:251] could not sync claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 13:33:42.059477  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35308
I1010 13:33:42.059519  110878 pv_controller.go:800] volume "pv-i-prebound" entered phase "Available"
I1010 13:33:42.059821  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35308
I1010 13:33:42.059886  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound (uid: )", boundByController: false
I1010 13:33:42.059899  110878 pv_controller.go:508] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound
I1010 13:33:42.059906  110878 pv_controller.go:779] updating PersistentVolume[pv-i-prebound]: set phase Available
I1010 13:33:42.059914  110878 pv_controller.go:782] updating PersistentVolume[pv-i-prebound]: phase Available already set
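Aside, not part of the captured log: the 409 on PUT /api/v1/persistentvolumes/pv-i-prebound (13:33:42.059) is an ordinary optimistic-concurrency conflict; the status update and the claimRef-binding update raced on the same object, so the losing writer was told it had been modified. The controller just records "could not sync claim" and lets a later requeue/resync pick the work up again (a periodic "resyncing PV controller" pass appears at 13:33:54). For code that prefers to resolve such conflicts inline, client-go ships a standard helper; the sketch below is a generic illustration assuming a recent client-go (context-taking Get/Update signatures), not the PV controller's own retry logic, and bindPVToClaim is a hypothetical helper name.

package example

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/util/retry"
)

// bindPVToClaim sets pv.Spec.ClaimRef to point at the given claim, re-reading the
// volume and re-applying the change whenever the update hits a 409 Conflict
// ("the object has been modified; please apply your changes to the latest version").
func bindPVToClaim(client kubernetes.Interface, pvName string, claim *corev1.PersistentVolumeClaim) error {
    return retry.RetryOnConflict(retry.DefaultRetry, func() error {
        pv, err := client.CoreV1().PersistentVolumes().Get(context.TODO(), pvName, metav1.GetOptions{})
        if err != nil {
            return err
        }
        pv.Spec.ClaimRef = &corev1.ObjectReference{
            Kind:      "PersistentVolumeClaim",
            Namespace: claim.Namespace,
            Name:      claim.Name,
            UID:       claim.UID,
        }
        _, err = client.CoreV1().PersistentVolumes().Update(context.TODO(), pv, metav1.UpdateOptions{})
        return err
    })
}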
I1010 13:33:42.060262  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pv-prebound
I1010 13:33:42.060277  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pv-prebound
E1010 13:33:42.060610  110878 factory.go:661] Error scheduling volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pv-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 13:33:42.060649  110878 scheduler.go:746] Updating pod condition for volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pv-prebound to (PodScheduled==False, Reason=Unschedulable)
I1010 13:33:42.063142  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.972991ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:42.064389  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound/status: (3.465276ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45388]
I1010 13:33:42.064409  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (2.242288ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
E1010 13:33:42.064678  110878 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 13:33:42.161076  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.755239ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:42.260984  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.663739ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:42.361087  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.687597ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:42.460726  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.498788ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:42.560601  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.438233ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:42.660706  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.534919ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:42.761225  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.969072ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:42.861200  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.928015ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:42.961324  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.044167ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:43.060695  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.559747ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:43.160966  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.706653ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:43.261140  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.877116ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:43.361089  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.813117ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:43.460709  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.463018ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:43.560882  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.685873ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:43.661454  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.934441ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:43.760883  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.745948ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:43.829364  110878 httplog.go:90] GET /api/v1/namespaces/default: (2.668393ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:43.831286  110878 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.460081ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:43.832655  110878 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (989.661µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:43.860684  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.564487ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:43.960786  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.618876ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:44.061453  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.264288ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:44.161366  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.011675ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:44.260944  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.66839ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:44.361787  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.396707ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:44.461808  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.301252ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:44.561675  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.364503ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:44.662814  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (3.543349ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:44.761489  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.212344ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:44.860920  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.689728ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:44.963435  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (4.259322ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:45.061062  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.819226ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:45.161648  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.395897ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:45.261244  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.963208ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:45.361091  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.867017ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:45.461346  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.028532ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:45.561645  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.366227ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:45.661054  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.914941ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:45.761055  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.785661ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:45.861098  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.908473ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:45.960713  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.485274ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:46.060958  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.697901ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:46.160703  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.5255ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:46.261466  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.076826ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:46.360905  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.686653ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:46.461103  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.838968ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:46.561525  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.271158ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:46.661718  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.428529ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:46.762355  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.994971ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:46.861370  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.112871ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:46.960944  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.696206ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:47.060740  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.585579ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:47.160821  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.511994ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:47.261058  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.806344ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:47.361841  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.086229ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:47.461093  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.817509ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:47.564585  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (3.071331ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:47.662072  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.089968ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:47.761917  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.564243ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:47.864546  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.062762ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:47.962655  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.692598ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:48.065541  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (6.19979ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:48.162004  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.790499ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:48.261013  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.831167ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:48.364847  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (4.95918ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:48.461336  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.911386ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:48.561089  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.783116ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:48.665016  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.186199ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:48.760870  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.587861ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:48.861314  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.370677ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:48.961535  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.182192ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:49.061337  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.040683ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:49.162582  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (3.281412ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:49.261645  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.335802ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:49.361316  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.920257ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:49.461461  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.011628ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:49.562104  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.734448ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:49.662851  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (3.132103ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:49.761558  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.293009ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:49.863029  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (3.767913ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:49.961485  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.148973ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:50.061273  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.979739ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:50.163985  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (4.721243ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:50.261536  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.103966ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:50.361246  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.048586ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:50.461385  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.967674ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:50.562030  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.713805ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:50.661973  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.850163ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:50.764462  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (3.143191ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:50.861415  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.959925ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:50.961484  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.278516ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:51.061273  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.095026ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:51.161347  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.00156ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:51.261330  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.985495ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:51.361599  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.194347ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:51.461155  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.881696ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:51.562194  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.766094ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:51.660901  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.745967ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:51.761187  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.937661ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:51.860963  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.623977ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:51.962083  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.811712ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:52.060872  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.627435ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:52.161150  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.741509ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:52.260943  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.733121ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:52.361628  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.188966ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:52.460981  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.771031ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:52.560942  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.693885ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:52.661663  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.299488ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:52.761231  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.846434ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:52.861368  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.111348ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:52.961019  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.786313ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:53.061515  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.218874ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:53.161242  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.06789ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:53.261635  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.352626ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:53.361462  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.14721ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:53.461228  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.877666ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:53.561603  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.387439ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:53.661342  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.114095ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:53.761160  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.923495ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:53.829729  110878 httplog.go:90] GET /api/v1/namespaces/default: (2.354955ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:53.832602  110878 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.32957ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:53.834628  110878 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.31518ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:53.862359  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (3.086881ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:53.960999  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.706404ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:54.011588  110878 pv_controller_base.go:426] resyncing PV controller
I1010 13:33:54.011706  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35308
I1010 13:33:54.011799  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound (uid: )", boundByController: false
I1010 13:33:54.011808  110878 pv_controller.go:508] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound
I1010 13:33:54.011815  110878 pv_controller.go:779] updating PersistentVolume[pv-i-prebound]: set phase Available
I1010 13:33:54.011824  110878 pv_controller.go:782] updating PersistentVolume[pv-i-prebound]: phase Available already set
I1010 13:33:54.011855  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound" with version 35305
I1010 13:33:54.011887  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:54.011928  110878 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound (uid: )", boundByController: false
I1010 13:33:54.011943  110878 pv_controller.go:933] binding volume "pv-i-prebound" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound"
I1010 13:33:54.011955  110878 pv_controller.go:831] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound"
I1010 13:33:54.011995  110878 pv_controller.go:851] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I1010 13:33:54.015360  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (2.855449ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:54.015816  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pv-prebound
I1010 13:33:54.015846  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pv-prebound
I1010 13:33:54.015923  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 37830
I1010 13:33:54.015962  110878 pv_controller.go:864] updating PersistentVolume[pv-i-prebound]: bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound"
I1010 13:33:54.015974  110878 pv_controller.go:779] updating PersistentVolume[pv-i-prebound]: set phase Bound
E1010 13:33:54.016178  110878 factory.go:661] Error scheduling volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pv-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 13:33:54.016221  110878 scheduler.go:746] Updating pod condition for volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pv-prebound to (PodScheduled==False, Reason=Unschedulable)
E1010 13:33:54.016236  110878 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 13:33:54.016487  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 37830
I1010 13:33:54.016545  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound (uid: 9e6733bd-3a8e-43af-9c24-68f1ae8ec2aa)", boundByController: false
I1010 13:33:54.016570  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound
I1010 13:33:54.016601  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:54.016620  110878 pv_controller.go:608] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1010 13:33:54.019259  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 37832
I1010 13:33:54.019317  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound (uid: 9e6733bd-3a8e-43af-9c24-68f1ae8ec2aa)", boundByController: false
I1010 13:33:54.019333  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound
I1010 13:33:54.019351  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:33:54.019364  110878 pv_controller.go:608] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1010 13:33:54.019483  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (3.035412ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:54.019919  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 37832
I1010 13:33:54.019955  110878 pv_controller.go:800] volume "pv-i-prebound" entered phase "Bound"
I1010 13:33:54.019971  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I1010 13:33:54.019988  110878 pv_controller.go:903] volume "pv-i-prebound" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound"
I1010 13:33:54.020043  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (3.501364ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:54.020262  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (3.099266ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:54.023644  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-i-pv-prebound: (3.394259ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49088]
I1010 13:33:54.023908  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound" with version 37835
I1010 13:33:54.023947  110878 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound]: bound to "pv-i-prebound"
I1010 13:33:54.023959  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound] status: set phase Bound
I1010 13:33:54.026213  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-i-pv-prebound/status: (1.873134ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:54.026729  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound" with version 37837
I1010 13:33:54.026794  110878 pv_controller.go:744] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound" entered phase "Bound"
I1010 13:33:54.026812  110878 pv_controller.go:959] volume "pv-i-prebound" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound"
I1010 13:33:54.026828  110878 pv_controller.go:960] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound (uid: 9e6733bd-3a8e-43af-9c24-68f1ae8ec2aa)", boundByController: false
I1010 13:33:54.026840  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1010 13:33:54.026875  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound" with version 37837
I1010 13:33:54.026890  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound]: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1010 13:33:54.026911  110878 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound (uid: 9e6733bd-3a8e-43af-9c24-68f1ae8ec2aa)", boundByController: false
I1010 13:33:54.026920  110878 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound]: claim is already correctly bound
I1010 13:33:54.026929  110878 pv_controller.go:933] binding volume "pv-i-prebound" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound"
I1010 13:33:54.026940  110878 pv_controller.go:831] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound"
I1010 13:33:54.026957  110878 pv_controller.go:843] updating PersistentVolume[pv-i-prebound]: already bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound"
I1010 13:33:54.026966  110878 pv_controller.go:779] updating PersistentVolume[pv-i-prebound]: set phase Bound
I1010 13:33:54.026974  110878 pv_controller.go:782] updating PersistentVolume[pv-i-prebound]: phase Bound already set
I1010 13:33:54.026981  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I1010 13:33:54.027000  110878 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound]: already bound to "pv-i-prebound"
I1010 13:33:54.027009  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound] status: set phase Bound
I1010 13:33:54.027028  110878 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound] status: phase Bound already set
I1010 13:33:54.027039  110878 pv_controller.go:959] volume "pv-i-prebound" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound"
I1010 13:33:54.027056  110878 pv_controller.go:960] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound (uid: 9e6733bd-3a8e-43af-9c24-68f1ae8ec2aa)", boundByController: false
I1010 13:33:54.027069  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1010 13:33:54.063444  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (4.088685ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:54.161202  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.908811ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:54.262117  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.803018ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:54.362292  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.727507ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:54.461866  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.558677ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:54.562413  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (3.026829ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:54.661917  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.648342ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:54.763412  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (4.138574ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:54.860721  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.527984ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:54.960666  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.531282ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.061404  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.158457ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.161943  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.749308ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.261343  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.089704ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.361252  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.959149ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.460945  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.598958ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.560695  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.492434ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.661013  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.749317ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.761522  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (2.231485ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.812290  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pv-prebound
I1010 13:33:55.812346  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pv-prebound
I1010 13:33:55.812651  110878 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pv-prebound" match with Node "node-1"
I1010 13:33:55.812844  110878 scheduler_binder.go:653] PersistentVolume "pv-i-prebound", Node "node-2" mismatch for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pv-prebound": No matching NodeSelectorTerms
I1010 13:33:55.812952  110878 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pv-prebound", node "node-1"
I1010 13:33:55.812971  110878 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pv-prebound", node "node-1": all PVCs bound and nothing to do
I1010 13:33:55.813035  110878 factory.go:710] Attempting to bind pod-i-pv-prebound to node-1
I1010 13:33:55.817440  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound/binding: (3.48211ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.818070  110878 scheduler.go:730] pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-i-pv-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1010 13:33:55.820950  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (2.230816ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.861029  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-i-pv-prebound: (1.719731ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.863106  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-i-pv-prebound: (1.458196ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.864691  110878 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-prebound: (1.107886ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.872197  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (7.012582ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.878229  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (5.544427ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.879501  110878 pv_controller_base.go:265] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound" deleted
I1010 13:33:55.879559  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 37832
I1010 13:33:55.879596  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound (uid: 9e6733bd-3a8e-43af-9c24-68f1ae8ec2aa)", boundByController: false
I1010 13:33:55.879609  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound
I1010 13:33:55.879636  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound not found
I1010 13:33:55.879674  110878 pv_controller.go:577] volume "pv-i-prebound" is released and reclaim policy "Retain" will be executed
I1010 13:33:55.879686  110878 pv_controller.go:779] updating PersistentVolume[pv-i-prebound]: set phase Released
I1010 13:33:55.884102  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (4.053277ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:55.884394  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 38305
I1010 13:33:55.884428  110878 pv_controller.go:800] volume "pv-i-prebound" entered phase "Released"
I1010 13:33:55.884441  110878 pv_controller.go:1013] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
I1010 13:33:55.884468  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 38305
I1010 13:33:55.884494  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-i-prebound]: phase: Released, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound (uid: 9e6733bd-3a8e-43af-9c24-68f1ae8ec2aa)", boundByController: false
I1010 13:33:55.884519  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound
I1010 13:33:55.884547  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound not found
I1010 13:33:55.884559  110878 pv_controller.go:1013] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
I1010 13:33:55.888959  110878 httplog.go:90] DELETE /api/v1/persistentvolumes: (9.916869ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.889608  110878 pv_controller_base.go:216] volume "pv-i-prebound" deleted
I1010 13:33:55.889679  110878 pv_controller_base.go:403] deletion of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-i-pv-prebound" was already processed
I1010 13:33:55.897438  110878 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (8.000693ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.897607  110878 volume_binding_test.go:191] Running test wait pvc prebound
I1010 13:33:55.900707  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.637442ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.904804  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.48626ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.907387  110878 httplog.go:90] POST /api/v1/persistentvolumes: (1.897256ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.908513  110878 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-pvc-prebound", version 38311
I1010 13:33:55.908558  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Pending, bound to: "", boundByController: false
I1010 13:33:55.908577  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I1010 13:33:55.908585  110878 pv_controller.go:779] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I1010 13:33:55.911395  110878 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound", version 38312
I1010 13:33:55.911515  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1010 13:33:55.911585  110878 pv_controller.go:349] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested
I1010 13:33:55.911642  110878 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested and found: phase: Pending, bound to: "", boundByController: false
I1010 13:33:55.911692  110878 pv_controller.go:372] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: volume is unbound, binding
I1010 13:33:55.911795  110878 pv_controller.go:933] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound"
I1010 13:33:55.911864  110878 pv_controller.go:831] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound"
I1010 13:33:55.911925  110878 pv_controller.go:851] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound" bound to volume "pv-w-pvc-prebound"
I1010 13:33:55.914687  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (6.683434ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.914862  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (6.016624ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:55.915201  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 38313
I1010 13:33:55.915227  110878 pv_controller.go:800] volume "pv-w-pvc-prebound" entered phase "Available"
I1010 13:33:55.915254  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 38313
I1010 13:33:55.915269  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I1010 13:33:55.915293  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I1010 13:33:55.915299  110878 pv_controller.go:779] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I1010 13:33:55.915308  110878 pv_controller.go:782] updating PersistentVolume[pv-w-pvc-prebound]: phase Available already set
I1010 13:33:55.915660  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound: (1.489175ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:33:55.915891  110878 pv_controller.go:854] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 13:33:55.915924  110878 pv_controller.go:936] error binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 13:33:55.915940  110878 pv_controller_base.go:251] could not sync claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 13:33:55.918873  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (2.77148ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51856]
I1010 13:33:55.920188  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound
I1010 13:33:55.920221  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound
E1010 13:33:55.920644  110878 factory.go:661] Error scheduling volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 13:33:55.920883  110878 scheduler.go:746] Updating pod condition for volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I1010 13:33:55.923980  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound/status: (2.752388ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
E1010 13:33:55.924499  110878 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 13:33:55.924570  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound
I1010 13:33:55.924580  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound
E1010 13:33:55.925117  110878 factory.go:661] Error scheduling volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 13:33:55.925572  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (1.817919ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52096]
I1010 13:33:55.925570  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.905891ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:55.926074  110878 scheduler.go:746] Updating pod condition for volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E1010 13:33:55.926175  110878 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 13:33:55.929181  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (2.288211ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:33:55.930425  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (3.748602ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
E1010 13:33:55.930723  110878 factory.go:685] pod is already present in unschedulableQ
I1010 13:33:56.022311  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.489228ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:56.122297  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.286064ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:56.222120  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.210802ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:56.322566  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.777059ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:56.422245  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.93902ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:56.522134  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.249846ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:56.623667  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (3.045331ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:56.721826  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.936059ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:56.821692  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.902407ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:56.922895  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.929611ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:57.029908  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (7.373212ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:57.122169  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.279416ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:57.222961  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (3.021248ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:57.321987  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.137449ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:57.422327  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.387681ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:57.523299  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (3.360575ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:57.622453  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.579432ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:57.722521  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.614813ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:57.821923  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.177716ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:57.926002  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (6.111267ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:58.022279  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.109604ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:58.122572  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.592728ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:58.221769  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.928353ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:58.321876  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.946551ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:58.422082  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.291502ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:58.522303  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.334461ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:58.621813  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.070261ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:58.727864  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (7.918384ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:58.824759  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (4.590042ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:58.922499  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.642359ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:59.021613  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.811345ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:59.121516  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.723299ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:59.230485  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (10.018123ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:59.322204  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.335299ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:59.422071  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.329555ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:59.522827  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.824472ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:59.625994  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (6.128113ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:59.724683  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (4.716946ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:59.821630  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.699558ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:33:59.927519  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (6.735237ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:00.028518  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (3.067996ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:00.128575  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (7.616014ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:00.221610  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.885013ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:00.322014  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.179551ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:00.423268  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.236998ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:00.522278  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.272364ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:00.621858  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.821594ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:00.721805  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.97611ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:00.821682  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.476068ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:00.922022  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.298305ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:01.022098  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.278805ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:01.121774  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.857063ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:01.222188  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.368041ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:01.321718  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.016302ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:01.421452  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.675059ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:01.522174  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.309117ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:01.621802  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.878944ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:01.723166  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (3.225996ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:01.823288  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (3.420431ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:01.921656  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.785319ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:02.022066  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.179823ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:02.121986  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.05538ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:02.221926  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.018032ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:02.322178  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.232976ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:02.422095  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.155984ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:02.521941  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.985863ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:02.622669  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.712166ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:02.722040  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.124866ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:02.825479  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (5.639848ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:02.921863  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.85917ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:03.021505  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.518371ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:03.122182  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.205439ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:03.222590  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.646828ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:03.324240  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (4.26384ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:03.421621  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.7558ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:03.522094  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.11726ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:03.622958  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (3.068946ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:03.722027  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.149363ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:03.822330  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.448934ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:03.829184  110878 httplog.go:90] GET /api/v1/namespaces/default: (1.870528ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:03.830812  110878 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.207354ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:03.832395  110878 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (978.172µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:03.923112  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (3.223058ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
[... 49 near-identical GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound polls omitted (13:34:04.02 to 13:34:08.82, ~100 ms apart, all 200) ...]
I1010 13:34:08.922053  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.304085ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:09.011876  110878 pv_controller_base.go:426] resyncing PV controller
I1010 13:34:09.012055  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound" with version 38312
I1010 13:34:09.012098  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1010 13:34:09.012113  110878 pv_controller.go:349] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested
I1010 13:34:09.012129  110878 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested and found: phase: Available, bound to: "", boundByController: false
I1010 13:34:09.012152  110878 pv_controller.go:372] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: volume is unbound, binding
I1010 13:34:09.012176  110878 pv_controller.go:933] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound"
I1010 13:34:09.012184  110878 pv_controller.go:831] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound"
I1010 13:34:09.012217  110878 pv_controller.go:851] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound" bound to volume "pv-w-pvc-prebound"
I1010 13:34:09.012416  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 38313
I1010 13:34:09.012468  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I1010 13:34:09.012597  110878 pv_controller.go:496] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I1010 13:34:09.012622  110878 pv_controller.go:779] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I1010 13:34:09.012633  110878 pv_controller.go:782] updating PersistentVolume[pv-w-pvc-prebound]: phase Available already set
I1010 13:34:09.020625  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound: (7.98565ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:09.020994  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 39635
I1010 13:34:09.021032  110878 pv_controller.go:864] updating PersistentVolume[pv-w-pvc-prebound]: bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound"
I1010 13:34:09.021044  110878 pv_controller.go:779] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1010 13:34:09.021156  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound
I1010 13:34:09.021179  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound
I1010 13:34:09.021302  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 39635
I1010 13:34:09.021338  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound (uid: 089e6da3-dd10-4a8d-8c37-1a427ed7d00f)", boundByController: true
I1010 13:34:09.021351  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound
I1010 13:34:09.021418  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound found: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1010 13:34:09.021433  110878 pv_controller.go:621] synchronizing PersistentVolume[pv-w-pvc-prebound]: all is bound
I1010 13:34:09.021443  110878 pv_controller.go:779] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
E1010 13:34:09.021591  110878 factory.go:661] Error scheduling volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 13:34:09.021647  110878 scheduler.go:746] Updating pod condition for volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E1010 13:34:09.021663  110878 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
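[Editor's aside, not part of the captured log: the scheduler keeps reporting "pod has unbound immediate PersistentVolumeClaims" above because pvc-w-prebound is still Pending while the PV controller binds it to pv-w-pvc-prebound. A "pre-bound" pair of this kind is usually just a PV plus a claim that names that volume in spec.volumeName. The sketch below reconstructs what such objects look like with the core v1 Go types; only the object names come from the log, while sizes, access modes and the HostPath source are assumptions, and this is not the test's actual fixture code.]

```go
// Minimal sketch (assumption, not the test's fixture): a pre-bound PV/PVC pair
// like pv-w-pvc-prebound / pvc-w-prebound built from the core v1 API types.
// Resources uses v1.ResourceRequirements, as in client libraries of this vintage.
package main

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func preboundObjects(ns string) (*v1.PersistentVolume, *v1.PersistentVolumeClaim) {
	pv := &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "pv-w-pvc-prebound"},
		Spec: v1.PersistentVolumeSpec{
			Capacity:    v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				// Illustrative local source; the real test may use something else.
				HostPath: &v1.HostPathVolumeSource{Path: "/tmp/pv-w-pvc-prebound"},
			},
		},
	}
	// The claim pre-binds itself by naming the volume. Until the PV controller
	// also stamps the volume's claimRef and flips both objects to Bound, the
	// scheduler treats this as an unbound immediate PVC, as seen in the log.
	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-w-prebound", Namespace: ns},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
			},
			VolumeName: "pv-w-pvc-prebound",
		},
	}
	return pv, pvc
}

func main() { _, _ = preboundObjects("volume-scheduling-example") }
```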
I1010 13:34:09.024046  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.966363ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:09.024406  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.952ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:09.026083  110878 store.go:365] GuaranteedUpdate of /d82f0006-70be-429b-bed8-090d5fff3021/persistentvolumes/pv-w-pvc-prebound failed because of a conflict, going to retry
I1010 13:34:09.026336  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (3.828815ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49082]
I1010 13:34:09.026397  110878 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events/pod-w-pvc-prebound.15cc4c751cf370c2: (2.804537ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54448]
I1010 13:34:09.026605  110878 pv_controller.go:792] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound failed: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 13:34:09.026628  110878 pv_controller.go:942] error binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound": failed saving the volume status: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 13:34:09.026643  110878 pv_controller_base.go:251] could not sync claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 13:34:09.027805  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (5.345905ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54444]
I1010 13:34:09.028147  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 39637
I1010 13:34:09.028181  110878 pv_controller.go:800] volume "pv-w-pvc-prebound" entered phase "Bound"
I1010 13:34:09.028203  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 39637
I1010 13:34:09.028222  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound (uid: 089e6da3-dd10-4a8d-8c37-1a427ed7d00f)", boundByController: true
I1010 13:34:09.028231  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound
I1010 13:34:09.028247  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound found: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1010 13:34:09.028258  110878 pv_controller.go:621] synchronizing PersistentVolume[pv-w-pvc-prebound]: all is bound
I1010 13:34:09.028264  110878 pv_controller.go:779] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1010 13:34:09.028271  110878 pv_controller.go:782] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
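[Editor's aside, not part of the captured log: the 409 at 13:34:09.026 ("the object has been modified; please apply your changes to the latest version and try again") is ordinary optimistic-concurrency behaviour, and the controller recovers by re-reading the volume and succeeding on the next pass, as the lines above show. A client-side caller would typically absorb the same conflict with client-go's retry.RetryOnConflict, roughly as sketched below. The kubeconfig path, the label mutation and the context-taking call signatures (recent client-go) are assumptions.]

```go
// Hypothetical sketch: retrying a read-modify-write on a PersistentVolume when
// the apiserver answers 409 Conflict, using k8s.io/client-go/util/retry.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Re-read the latest object and reapply the change whenever Update hits a
	// resourceVersion conflict, mirroring what the PV controller's next sync does.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pv, getErr := cs.CoreV1().PersistentVolumes().Get(context.TODO(), "pv-w-pvc-prebound", metav1.GetOptions{})
		if getErr != nil {
			return getErr
		}
		if pv.Labels == nil {
			pv.Labels = map[string]string{}
		}
		pv.Labels["example"] = "conflict-retry" // any mutation; purely illustrative
		_, updErr := cs.CoreV1().PersistentVolumes().Update(context.TODO(), pv, metav1.UpdateOptions{})
		return updErr
	})
	if err != nil {
		fmt.Println("update failed after retries:", err)
	}
}
```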
I1010 13:34:09.122119  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.303395ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:09.226141  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.823535ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:09.322037  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.291058ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:09.421125  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.408186ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:09.522167  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.344757ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:09.621603  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.693892ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:09.721818  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.966117ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:09.823526  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (3.771444ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:09.922111  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.062729ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:10.021881  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.958032ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:10.122266  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.027058ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:10.222201  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.259389ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:10.322298  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.846657ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:10.422100  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.153578ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:10.521658  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.767141ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:10.622333  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.380526ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:10.722290  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.384174ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:10.816816  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound
I1010 13:34:10.816855  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound
E1010 13:34:10.817133  110878 factory.go:661] Error scheduling volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I1010 13:34:10.817176  110878 scheduler.go:746] Updating pod condition for volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E1010 13:34:10.817190  110878 scheduler.go:627] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I1010 13:34:10.819519  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.939181ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:10.821076  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.258838ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:10.921437  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.612269ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
[... 28 near-identical GET .../pods/pod-w-pvc-prebound polls omitted (13:34:11.02 to 13:34:13.72, all 200) ...]
I1010 13:34:13.822983  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (3.122987ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:13.829188  110878 httplog.go:90] GET /api/v1/namespaces/default: (1.927616ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:13.831519  110878 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.853797ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:13.833854  110878 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.933407ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:13.921567  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.653027ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
[... 65 near-identical GET .../pods/pod-w-pvc-prebound polls omitted (13:34:14.02 to 13:34:20.42, all 200) ...]
I1010 13:34:20.522093  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.243656ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:20.545053  110878 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.811209ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:20.546987  110878 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.431562ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:20.548407  110878 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.030195ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:20.621988  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (2.145342ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
[... 31 near-identical GET .../pods/pod-w-pvc-prebound polls omitted (13:34:20.72 to 13:34:23.72, all 200) ...]
I1010 13:34:23.821656  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.945807ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:23.829001  110878 httplog.go:90] GET /api/v1/namespaces/default: (1.61022ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:23.830389  110878 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.064461ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:23.831796  110878 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.024363ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:23.921697  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.853ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:24.012136  110878 pv_controller_base.go:426] resyncing PV controller
I1010 13:34:24.012282  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 39637
I1010 13:34:24.012345  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound (uid: 089e6da3-dd10-4a8d-8c37-1a427ed7d00f)", boundByController: true
I1010 13:34:24.012358  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound
I1010 13:34:24.012385  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound found: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1010 13:34:24.012401  110878 pv_controller.go:621] synchronizing PersistentVolume[pv-w-pvc-prebound]: all is bound
I1010 13:34:24.012411  110878 pv_controller.go:779] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1010 13:34:24.012421  110878 pv_controller.go:782] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I1010 13:34:24.012449  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound" with version 38312
I1010 13:34:24.012465  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1010 13:34:24.012483  110878 pv_controller.go:349] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested
I1010 13:34:24.012503  110878 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested and found: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound (uid: 089e6da3-dd10-4a8d-8c37-1a427ed7d00f)", boundByController: true
I1010 13:34:24.012522  110878 pv_controller.go:392] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: volume already bound, finishing the binding
I1010 13:34:24.012534  110878 pv_controller.go:933] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound"
I1010 13:34:24.012548  110878 pv_controller.go:831] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound"
I1010 13:34:24.012581  110878 pv_controller.go:843] updating PersistentVolume[pv-w-pvc-prebound]: already bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound"
I1010 13:34:24.012590  110878 pv_controller.go:779] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1010 13:34:24.012598  110878 pv_controller.go:782] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I1010 13:34:24.012608  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: binding to "pv-w-pvc-prebound"
I1010 13:34:24.012626  110878 pv_controller.go:903] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound"
I1010 13:34:24.015771  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-prebound: (2.531508ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:24.018083  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound" with version 40701
I1010 13:34:24.018119  110878 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: bound to "pv-w-pvc-prebound"
I1010 13:34:24.018133  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound] status: set phase Bound
I1010 13:34:24.018194  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound
I1010 13:34:24.018209  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound
I1010 13:34:24.018426  110878 scheduler_binder.go:653] PersistentVolume "pv-w-pvc-prebound", Node "node-2" mismatch for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound": No matching NodeSelectorTerms
I1010 13:34:24.018426  110878 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound" match with Node "node-1"
I1010 13:34:24.018509  110878 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound", node "node-1"
I1010 13:34:24.018526  110878 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound", node "node-1": all PVCs bound and nothing to do
I1010 13:34:24.018581  110878 factory.go:710] Attempting to bind pod-w-pvc-prebound to node-1
I1010 13:34:24.021740  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound: (1.688223ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56140]
I1010 13:34:24.022640  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pvc-prebound/binding: (2.261616ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:24.023181  110878 scheduler.go:730] pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pvc-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1010 13:34:24.025641  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-prebound: (3.222124ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56140]
I1010 13:34:24.025927  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-prebound/status: (7.474135ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:24.026614  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (3.051188ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:24.027049  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound" with version 40706
I1010 13:34:24.027088  110878 pv_controller.go:744] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound" entered phase "Bound"
I1010 13:34:24.027106  110878 pv_controller.go:959] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound"
I1010 13:34:24.027134  110878 pv_controller.go:960] volume "pv-w-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound (uid: 089e6da3-dd10-4a8d-8c37-1a427ed7d00f)", boundByController: true
I1010 13:34:24.027166  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound" status after binding: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I1010 13:34:24.027209  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound" with version 40706
I1010 13:34:24.027225  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I1010 13:34:24.027241  110878 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: volume "pv-w-pvc-prebound" found: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound (uid: 089e6da3-dd10-4a8d-8c37-1a427ed7d00f)", boundByController: true
I1010 13:34:24.027253  110878 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: claim is already correctly bound
I1010 13:34:24.027264  110878 pv_controller.go:933] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound"
I1010 13:34:24.027275  110878 pv_controller.go:831] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound"
I1010 13:34:24.027296  110878 pv_controller.go:843] updating PersistentVolume[pv-w-pvc-prebound]: already bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound"
I1010 13:34:24.027307  110878 pv_controller.go:779] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1010 13:34:24.027316  110878 pv_controller.go:782] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I1010 13:34:24.027326  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: binding to "pv-w-pvc-prebound"
I1010 13:34:24.027348  110878 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound]: already bound to "pv-w-pvc-prebound"
I1010 13:34:24.027372  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound] status: set phase Bound
I1010 13:34:24.027394  110878 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound] status: phase Bound already set
I1010 13:34:24.027408  110878 pv_controller.go:959] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound"
I1010 13:34:24.027430  110878 pv_controller.go:960] volume "pv-w-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound (uid: 089e6da3-dd10-4a8d-8c37-1a427ed7d00f)", boundByController: true
I1010 13:34:24.027447  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound" status after binding: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I1010 13:34:24.027506  110878 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-pvc-prebound: (1.06556ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56140]
I1010 13:34:24.033833  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (5.840523ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:24.038001  110878 pv_controller_base.go:265] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound" deleted
I1010 13:34:24.037997  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (3.769773ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:24.038053  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 39637
I1010 13:34:24.038085  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound (uid: 089e6da3-dd10-4a8d-8c37-1a427ed7d00f)", boundByController: true
I1010 13:34:24.038096  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound
I1010 13:34:24.039109  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-prebound: (860.054µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:24.039872  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound not found
I1010 13:34:24.039911  110878 pv_controller.go:577] volume "pv-w-pvc-prebound" is released and reclaim policy "Retain" will be executed
I1010 13:34:24.039924  110878 pv_controller.go:779] updating PersistentVolume[pv-w-pvc-prebound]: set phase Released
I1010 13:34:24.044486  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (3.29498ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:24.044608  110878 store.go:231] deletion of /d82f0006-70be-429b-bed8-090d5fff3021/persistentvolumes/pv-w-pvc-prebound failed because of a conflict, going to retry
I1010 13:34:24.044994  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 40717
I1010 13:34:24.045024  110878 pv_controller.go:800] volume "pv-w-pvc-prebound" entered phase "Released"
I1010 13:34:24.045033  110878 pv_controller.go:1013] reclaimVolume[pv-w-pvc-prebound]: policy is Retain, nothing to do
I1010 13:34:24.046068  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 40717
I1010 13:34:24.046114  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Released, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound (uid: 089e6da3-dd10-4a8d-8c37-1a427ed7d00f)", boundByController: true
I1010 13:34:24.046129  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound
I1010 13:34:24.046151  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound not found
I1010 13:34:24.046158  110878 pv_controller.go:1013] reclaimVolume[pv-w-pvc-prebound]: policy is Retain, nothing to do
I1010 13:34:24.048967  110878 pv_controller_base.go:216] volume "pv-w-pvc-prebound" deleted
I1010 13:34:24.049011  110878 pv_controller_base.go:403] deletion of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound" was already processed
I1010 13:34:24.049607  110878 httplog.go:90] DELETE /api/v1/persistentvolumes: (10.722555ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:24.061595  110878 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (11.231447ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:24.061900  110878 volume_binding_test.go:191] Running test wait pv prebound
I1010 13:34:24.063732  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.54069ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:24.065970  110878 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.771393ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:24.069864  110878 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-prebound", version 40733
I1010 13:34:24.069909  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-prebound]: phase: Pending, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound (uid: )", boundByController: false
I1010 13:34:24.069917  110878 pv_controller.go:508] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound
I1010 13:34:24.069924  110878 pv_controller.go:779] updating PersistentVolume[pv-w-prebound]: set phase Available
I1010 13:34:24.069945  110878 httplog.go:90] POST /api/v1/persistentvolumes: (3.307006ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:24.072355  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (2.003734ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:24.072800  110878 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound", version 40735
I1010 13:34:24.072952  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:34:24.073086  110878 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Pending, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound (uid: )", boundByController: false
I1010 13:34:24.073190  110878 pv_controller.go:933] binding volume "pv-w-prebound" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound"
I1010 13:34:24.073269  110878 pv_controller.go:831] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound"
I1010 13:34:24.073356  110878 pv_controller.go:851] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I1010 13:34:24.075519  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (5.221106ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:24.075737  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 40736
I1010 13:34:24.075781  110878 pv_controller.go:800] volume "pv-w-prebound" entered phase "Available"
I1010 13:34:24.075808  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 40736
I1010 13:34:24.075830  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound (uid: )", boundByController: false
I1010 13:34:24.075837  110878 pv_controller.go:508] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound
I1010 13:34:24.075844  110878 pv_controller.go:779] updating PersistentVolume[pv-w-prebound]: set phase Available
I1010 13:34:24.075853  110878 pv_controller.go:782] updating PersistentVolume[pv-w-prebound]: phase Available already set
I1010 13:34:24.076116  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (3.261936ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I1010 13:34:24.076395  110878 scheduling_queue.go:883] About to try and schedule pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pv-prebound
I1010 13:34:24.076419  110878 scheduler.go:598] Attempting to schedule pod: volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pv-prebound
I1010 13:34:24.076599  110878 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pv-prebound" on node "node-1"
I1010 13:34:24.076734  110878 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pv-prebound", PVC "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound" on node "node-2"
I1010 13:34:24.077039  110878 scheduler_binder.go:725] storage class "wait-hjpn" of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound" does not support dynamic provisioning
I1010 13:34:24.077235  110878 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pv-prebound", node "node-1"
I1010 13:34:24.077376  110878 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-prebound", version 40736
I1010 13:34:24.077588  110878 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pv-prebound", node "node-1"
I1010 13:34:24.077703  110878 scheduler_binder.go:404] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I1010 13:34:24.078731  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (4.850252ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:24.079094  110878 pv_controller.go:854] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 13:34:24.079131  110878 pv_controller.go:936] error binding volume "pv-w-prebound" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 13:34:24.079161  110878 pv_controller_base.go:251] could not sync claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I1010 13:34:24.080152  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (1.931375ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
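
The 409 on the scheduler binder's PUT at 13:34:24.078731 and the "object has been modified" messages above it show the binder and the PV controller racing to update the same PersistentVolume; the controller re-reads the object and the next PUT succeeds. Below is a hedged sketch of that optimistic-concurrency retry pattern using client-go's retry helper; the names and helper are illustrative only, not the controller's actual code, and it assumes a pre-1.18 client-go where Get/Update take no context.

// Illustrative sketch only, not the PV controller's code: re-fetch and retry
// a PersistentVolume update when the apiserver returns a 409 Conflict, the
// same "object has been modified" error logged above.
package pvexample

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// bindPVToClaim sets pv.Spec.ClaimRef to the given claim reference, retrying
// on resourceVersion conflicts caused by concurrent writers.
func bindPVToClaim(cs kubernetes.Interface, pvName string, claimRef *v1.ObjectReference) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Always work on the latest copy so the resourceVersion is current.
		pv, err := cs.CoreV1().PersistentVolumes().Get(pvName, metav1.GetOptions{})
		if err != nil {
			return err
		}
		pv.Spec.ClaimRef = claimRef
		_, err = cs.CoreV1().PersistentVolumes().Update(pv)
		return err
	})
}
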
I1010 13:34:24.080588  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 40740
I1010 13:34:24.080799  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound (uid: b8c12013-301c-4520-aa3e-951b51f9bb4b)", boundByController: false
I1010 13:34:24.080817  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound
I1010 13:34:24.080835  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:34:24.080849  110878 pv_controller.go:608] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1010 13:34:24.080881  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound" with version 40735
I1010 13:34:24.080903  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:34:24.080929  110878 pv_controller.go:330] synchronizing unbound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Available, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound (uid: b8c12013-301c-4520-aa3e-951b51f9bb4b)", boundByController: false
I1010 13:34:24.080940  110878 pv_controller.go:933] binding volume "pv-w-prebound" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound"
I1010 13:34:24.080952  110878 pv_controller.go:831] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound"
I1010 13:34:24.080969  110878 pv_controller.go:843] updating PersistentVolume[pv-w-prebound]: already bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound"
I1010 13:34:24.080978  110878 pv_controller.go:779] updating PersistentVolume[pv-w-prebound]: set phase Bound
I1010 13:34:24.081254  110878 scheduler_binder.go:410] updating PersistentVolume[pv-w-prebound]: bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound"
I1010 13:34:24.083406  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (2.156669ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:24.084179  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 40741
I1010 13:34:24.084210  110878 pv_controller.go:800] volume "pv-w-prebound" entered phase "Bound"
I1010 13:34:24.084223  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I1010 13:34:24.084240  110878 pv_controller.go:903] volume "pv-w-prebound" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound"
I1010 13:34:24.084787  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 40741
I1010 13:34:24.085135  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound (uid: b8c12013-301c-4520-aa3e-951b51f9bb4b)", boundByController: false
I1010 13:34:24.085283  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound
I1010 13:34:24.085401  110878 pv_controller.go:557] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1010 13:34:24.085499  110878 pv_controller.go:608] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1010 13:34:24.086558  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-pv-prebound: (2.044938ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:24.086919  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound" with version 40744
I1010 13:34:24.086944  110878 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound]: bound to "pv-w-prebound"
I1010 13:34:24.086956  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound] status: set phase Bound
I1010 13:34:24.089103  110878 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-pv-prebound/status: (1.644066ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:24.089692  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound" with version 40745
I1010 13:34:24.089728  110878 pv_controller.go:744] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound" entered phase "Bound"
I1010 13:34:24.089763  110878 pv_controller.go:959] volume "pv-w-prebound" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound"
I1010 13:34:24.089786  110878 pv_controller.go:960] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound (uid: b8c12013-301c-4520-aa3e-951b51f9bb4b)", boundByController: false
I1010 13:34:24.089802  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I1010 13:34:24.089831  110878 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound" with version 40745
I1010 13:34:24.089845  110878 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound]: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I1010 13:34:24.089861  110878 pv_controller.go:451] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound (uid: b8c12013-301c-4520-aa3e-951b51f9bb4b)", boundByController: false
I1010 13:34:24.089879  110878 pv_controller.go:468] synchronizing bound PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound]: claim is already correctly bound
I1010 13:34:24.089889  110878 pv_controller.go:933] binding volume "pv-w-prebound" to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound"
I1010 13:34:24.089899  110878 pv_controller.go:831] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound"
I1010 13:34:24.089917  110878 pv_controller.go:843] updating PersistentVolume[pv-w-prebound]: already bound to "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound"
I1010 13:34:24.089925  110878 pv_controller.go:779] updating PersistentVolume[pv-w-prebound]: set phase Bound
I1010 13:34:24.089934  110878 pv_controller.go:782] updating PersistentVolume[pv-w-prebound]: phase Bound already set
I1010 13:34:24.089942  110878 pv_controller.go:871] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I1010 13:34:24.089960  110878 pv_controller.go:918] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound]: already bound to "pv-w-prebound"
I1010 13:34:24.089968  110878 pv_controller.go:685] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound] status: set phase Bound
I1010 13:34:24.089987  110878 pv_controller.go:730] updating PersistentVolumeClaim[volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound] status: phase Bound already set
I1010 13:34:24.089998  110878 pv_controller.go:959] volume "pv-w-prebound" bound to claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound"
I1010 13:34:24.090017  110878 pv_controller.go:960] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound (uid: b8c12013-301c-4520-aa3e-951b51f9bb4b)", boundByController: false
I1010 13:34:24.090032  110878 pv_controller.go:961] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I1010 13:34:24.179259  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pv-prebound: (2.279055ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:24.279017  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pv-prebound: (1.92092ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:24.379029  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pv-prebound: (1.626236ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:24.478362  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pv-prebound: (1.455338ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:24.578664  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pv-prebound: (1.71302ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:24.678719  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pv-prebound: (1.679756ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:24.778513  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pv-prebound: (1.589095ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:24.816558  110878 cache.go:669] Couldn't expire cache for pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pv-prebound. Binding is still in progress.
I1010 13:34:24.878233  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pv-prebound: (1.312507ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:24.979534  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pv-prebound: (2.488287ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:25.078437  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pv-prebound: (1.454446ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:25.081443  110878 scheduler_binder.go:553] All PVCs for pod "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pv-prebound" are bound
I1010 13:34:25.081485  110878 factory.go:710] Attempting to bind pod-w-pv-prebound to node-1
I1010 13:34:25.084244  110878 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pv-prebound/binding: (2.51051ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:25.085549  110878 scheduler.go:730] pod volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pod-w-pv-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I1010 13:34:25.087530  110878 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/events: (1.686674ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:25.178783  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods/pod-w-pv-prebound: (1.737948ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:25.180318  110878 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims/pvc-w-pv-prebound: (1.063175ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:25.182242  110878 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-prebound: (1.605689ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:25.189268  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (6.657224ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:25.194240  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (4.185049ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:25.194583  110878 pv_controller_base.go:265] claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound" deleted
I1010 13:34:25.194621  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 40741
I1010 13:34:25.194653  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound (uid: b8c12013-301c-4520-aa3e-951b51f9bb4b)", boundByController: false
I1010 13:34:25.194663  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound
I1010 13:34:25.194684  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound not found
I1010 13:34:25.194698  110878 pv_controller.go:577] volume "pv-w-prebound" is released and reclaim policy "Retain" will be executed
I1010 13:34:25.194708  110878 pv_controller.go:779] updating PersistentVolume[pv-w-prebound]: set phase Released
I1010 13:34:25.196696  110878 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (1.72506ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52092]
I1010 13:34:25.196985  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 41027
I1010 13:34:25.197013  110878 pv_controller.go:800] volume "pv-w-prebound" entered phase "Released"
I1010 13:34:25.197095  110878 pv_controller.go:1013] reclaimVolume[pv-w-prebound]: policy is Retain, nothing to do
I1010 13:34:25.197122  110878 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 41027
I1010 13:34:25.197148  110878 pv_controller.go:491] synchronizing PersistentVolume[pv-w-prebound]: phase: Released, bound to: "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound (uid: b8c12013-301c-4520-aa3e-951b51f9bb4b)", boundByController: false
I1010 13:34:25.197160  110878 pv_controller.go:516] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound
I1010 13:34:25.197177  110878 pv_controller.go:549] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound not found
I1010 13:34:25.197195  110878 pv_controller.go:1013] reclaimVolume[pv-w-prebound]: policy is Retain, nothing to do
I1010 13:34:25.198285  110878 httplog.go:90] DELETE /api/v1/persistentvolumes: (3.561236ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:25.198527  110878 pv_controller_base.go:216] volume "pv-w-prebound" deleted
I1010 13:34:25.198556  110878 pv_controller_base.go:403] deletion of claim "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-pv-prebound" was already processed
I1010 13:34:25.204611  110878 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.02036ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:25.204872  110878 volume_binding_test.go:920] test cluster "volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4" start to tear down
I1010 13:34:25.206241  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pods: (1.160347ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:25.207578  110878 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/persistentvolumeclaims: (1.080826ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:25.209178  110878 httplog.go:90] DELETE /api/v1/persistentvolumes: (1.297457ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:25.210487  110878 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (850.352µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:25.211084  110878 pv_controller_base.go:305] Shutting down persistent volume controller
I1010 13:34:25.211153  110878 pv_controller_base.go:416] claim worker queue shutting down
I1010 13:34:25.211200  110878 pv_controller_base.go:359] volume worker queue shutting down
I1010 13:34:25.211647  110878 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=32397&timeout=7m29s&timeoutSeconds=449&watch=true: (1m1.387788525s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44504]
I1010 13:34:25.211738  110878 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=32398&timeout=9m42s&timeoutSeconds=582&watch=true: (1m1.402295108s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44094]
I1010 13:34:25.211832  110878 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=32397&timeout=9m19s&timeoutSeconds=559&watch=true: (1m1.294700976s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44522]
I1010 13:34:25.212152  110878 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32396&timeout=9m45s&timeoutSeconds=585&watch=true: (1m1.29660776s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44512]
I1010 13:34:25.212011  110878 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=32403&timeout=9m9s&timeoutSeconds=549&watch=true: (1m1.401555847s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44494]
I1010 13:34:25.212079  110878 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32404&timeout=9m2s&timeoutSeconds=542&watch=true: (1m1.403002577s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44480]
I1010 13:34:25.212355  110878 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32396&timeout=7m40s&timeoutSeconds=460&watch=true: (1m1.296419266s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44520]
I1010 13:34:25.212501  110878 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32396&timeout=7m2s&timeoutSeconds=422&watch=true: (1m1.389853046s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44498]
I1010 13:34:25.212591  110878 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32396&timeout=7m28s&timeoutSeconds=448&watch=true: (1m1.400188362s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44486]
I1010 13:34:25.212660  110878 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32404&timeout=8m20s&timeoutSeconds=500&watch=true: (1m1.295804959s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44518]
I1010 13:34:25.212701  110878 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32396&timeout=9m16s&timeoutSeconds=556&watch=true: (1m1.399722021s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44496]
I1010 13:34:25.212893  110878 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=32404&timeout=9m32s&timeoutSeconds=572&watch=true: (1m1.389827223s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44500]
I1010 13:34:25.212971  110878 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=32404&timeout=8m12s&timeoutSeconds=492&watch=true: (1m1.407916038s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44288]
I1010 13:34:25.213065  110878 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32396&timeout=8m45s&timeoutSeconds=525&watch=true: (1m1.298161504s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44514]
I1010 13:34:25.213188  110878 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=32398&timeout=9m43s&timeoutSeconds=583&watch=true: (1m1.39044164s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44502]
I1010 13:34:25.213329  110878 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=32404&timeout=6m22s&timeoutSeconds=382&watch=true: (1m1.391624705s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44490]
I1010 13:34:25.220080  110878 httplog.go:90] DELETE /api/v1/nodes: (8.211138ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:25.220293  110878 controller.go:185] Shutting down kubernetes service endpoint reconciler
I1010 13:34:25.221736  110878 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.101757ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
I1010 13:34:25.224071  110878 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.868728ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56142]
--- FAIL: TestVolumeBinding (65.09s)
    volume_binding_test.go:1131: PVC volume-scheduling-181e3b86-6907-4803-8277-d781b074a3e4/pvc-w-prebound phase not Bound, got Pending

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20191010-132450.xml
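
The failing assertion at volume_binding_test.go:1131 is a timed-out wait: the test polled pvc-w-prebound for roughly a minute and its status.phase never left Pending. A hedged sketch of that kind of wait loop with client-go follows; the function name is illustrative, not the test's actual helper, and it assumes a pre-1.18 client-go where Get takes no context.

// Illustrative sketch only: poll a PersistentVolumeClaim until status.phase
// is Bound or the timeout expires, the kind of check that reported
// "phase not Bound, got Pending".
package pvexample

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err // give up on unexpected API errors
		}
		return pvc.Status.Phase == v1.ClaimBound, nil
	})
}
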
