PR kkmsft: Azure: filter disks with ToBeDetached flag
Result: FAILURE
Tests: 1 failed / 2898 succeeded
Started: 2019-11-09 00:31
Elapsed: 25m27s
Revision: ddf4d0e45608ff99c33b00a1050810d2c6cbc694
Refs: 84958

Test Failures


k8s.io/kubernetes/test/integration/volumescheduling TestVolumeBinding 1m7s

go test -v k8s.io/kubernetes/test/integration/volumescheduling -run TestVolumeBinding$
=== RUN   TestVolumeBinding
W1109 00:53:08.946704  112068 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I1109 00:53:08.946841  112068 services.go:51] Setting service IP to "10.0.0.1" (read-write).
I1109 00:53:08.946917  112068 master.go:309] Node port range unspecified. Defaulting to 30000-32767.
I1109 00:53:08.946969  112068 master.go:265] Using reconciler: 
I1109 00:53:08.948987  112068 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:08.949489  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:08.949594  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:08.951030  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:08.951066  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:08.953913  112068 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I1109 00:53:08.954000  112068 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:08.954040  112068 reflector.go:188] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I1109 00:53:08.954360  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:08.954392  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:08.955450  112068 store.go:1342] Monitoring events count at <storage-prefix>//events
I1109 00:53:08.955542  112068 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:08.955619  112068 reflector.go:188] Listing and watching *core.Event from storage/cacher.go:/events
I1109 00:53:08.955753  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:08.955787  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:08.956013  112068 watch_cache.go:409] Replace watchCache (rev: 31102) 
I1109 00:53:08.958073  112068 watch_cache.go:409] Replace watchCache (rev: 31102) 
I1109 00:53:08.958484  112068 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I1109 00:53:08.958604  112068 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:08.958920  112068 reflector.go:188] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I1109 00:53:08.960432  112068 watch_cache.go:409] Replace watchCache (rev: 31102) 
I1109 00:53:08.960458  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:08.960504  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:08.962064  112068 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I1109 00:53:08.962298  112068 reflector.go:188] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I1109 00:53:08.962394  112068 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:08.962580  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:08.962608  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:08.965889  112068 watch_cache.go:409] Replace watchCache (rev: 31103) 
I1109 00:53:08.967302  112068 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I1109 00:53:08.967428  112068 reflector.go:188] Listing and watching *core.Secret from storage/cacher.go:/secrets
I1109 00:53:08.967749  112068 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:08.968473  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:08.968539  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:08.970654  112068 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I1109 00:53:08.970932  112068 reflector.go:188] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I1109 00:53:08.971417  112068 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:08.971622  112068 watch_cache.go:409] Replace watchCache (rev: 31104) 
I1109 00:53:08.971745  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:08.971778  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:08.972322  112068 watch_cache.go:409] Replace watchCache (rev: 31104) 
I1109 00:53:08.973192  112068 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I1109 00:53:08.973361  112068 reflector.go:188] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I1109 00:53:08.973597  112068 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:08.973831  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:08.973861  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:08.974626  112068 watch_cache.go:409] Replace watchCache (rev: 31104) 
I1109 00:53:08.975060  112068 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I1109 00:53:08.975262  112068 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:08.975484  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:08.975558  112068 reflector.go:188] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I1109 00:53:08.975616  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:08.976355  112068 watch_cache.go:409] Replace watchCache (rev: 31104) 
I1109 00:53:08.980695  112068 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I1109 00:53:08.980911  112068 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:08.981157  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:08.981187  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:08.981367  112068 reflector.go:188] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I1109 00:53:08.984398  112068 watch_cache.go:409] Replace watchCache (rev: 31107) 
I1109 00:53:08.984848  112068 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I1109 00:53:08.985146  112068 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:08.985313  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:08.985852  112068 reflector.go:188] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I1109 00:53:08.985340  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:08.987402  112068 watch_cache.go:409] Replace watchCache (rev: 31108) 
I1109 00:53:08.988966  112068 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I1109 00:53:08.989115  112068 reflector.go:188] Listing and watching *core.Node from storage/cacher.go:/minions
I1109 00:53:08.989320  112068 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:08.989545  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:08.989577  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:08.990205  112068 watch_cache.go:409] Replace watchCache (rev: 31108) 
I1109 00:53:08.990541  112068 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I1109 00:53:08.990813  112068 reflector.go:188] Listing and watching *core.Pod from storage/cacher.go:/pods
I1109 00:53:08.991160  112068 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:08.991439  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:08.991474  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:08.996868  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:08.997069  112068 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I1109 00:53:08.997394  112068 reflector.go:188] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I1109 00:53:08.997569  112068 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:08.997854  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:08.997886  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:08.999093  112068 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I1109 00:53:08.999320  112068 reflector.go:188] Listing and watching *core.Service from storage/cacher.go:/services/specs
I1109 00:53:08.999461  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:08.999454  112068 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.000173  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.000333  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.001751  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.001776  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.001939  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.002888  112068 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.003003  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.003026  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.003858  112068 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I1109 00:53:09.003891  112068 rest.go:115] the default service ipfamily for this cluster is: IPv4
I1109 00:53:09.004005  112068 reflector.go:188] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I1109 00:53:09.004439  112068 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.004736  112068 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.004893  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.005661  112068 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.006277  112068 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.007019  112068 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.007692  112068 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.008098  112068 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.008288  112068 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.008485  112068 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.008955  112068 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.009579  112068 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.009752  112068 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.010411  112068 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.010669  112068 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.011125  112068 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.011393  112068 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.012031  112068 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.012188  112068 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.012375  112068 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.012544  112068 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.012779  112068 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.012980  112068 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.013187  112068 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.014009  112068 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.014252  112068 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.015042  112068 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.015829  112068 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.016114  112068 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.016431  112068 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.017155  112068 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.017426  112068 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.018380  112068 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.019006  112068 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.019748  112068 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.020534  112068 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.020775  112068 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.020860  112068 master.go:493] Skipping disabled API group "auditregistration.k8s.io".
I1109 00:53:09.020877  112068 master.go:504] Enabling API group "authentication.k8s.io".
I1109 00:53:09.020889  112068 master.go:504] Enabling API group "authorization.k8s.io".
I1109 00:53:09.021072  112068 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.021229  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.021258  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.022777  112068 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1109 00:53:09.023003  112068 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.023263  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.023296  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.023461  112068 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1109 00:53:09.024292  112068 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1109 00:53:09.024365  112068 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1109 00:53:09.024451  112068 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.024612  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.024641  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.024843  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.025324  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.026818  112068 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1109 00:53:09.026846  112068 master.go:504] Enabling API group "autoscaling".
I1109 00:53:09.026914  112068 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1109 00:53:09.027196  112068 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.027431  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.027454  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.028450  112068 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I1109 00:53:09.028674  112068 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.028848  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.028879  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.028978  112068 reflector.go:188] Listing and watching *batch.Job from storage/cacher.go:/jobs
I1109 00:53:09.029917  112068 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I1109 00:53:09.029955  112068 master.go:504] Enabling API group "batch".
I1109 00:53:09.030035  112068 reflector.go:188] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I1109 00:53:09.030164  112068 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.030291  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.030315  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.031147  112068 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I1109 00:53:09.031185  112068 master.go:504] Enabling API group "certificates.k8s.io".
I1109 00:53:09.031338  112068 reflector.go:188] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I1109 00:53:09.031405  112068 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.031549  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.031577  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.032673  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.032694  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.032944  112068 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1109 00:53:09.032953  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.032972  112068 reflector.go:188] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1109 00:53:09.033257  112068 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.033367  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.033383  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.033940  112068 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1109 00:53:09.034074  112068 master.go:504] Enabling API group "coordination.k8s.io".
I1109 00:53:09.035041  112068 master.go:493] Skipping disabled API group "discovery.k8s.io".
I1109 00:53:09.034089  112068 reflector.go:188] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1109 00:53:09.035536  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.035807  112068 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.036411  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.036951  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.039827  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.041646  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.045026  112068 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1109 00:53:09.045065  112068 master.go:504] Enabling API group "extensions".
I1109 00:53:09.045101  112068 reflector.go:188] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1109 00:53:09.045310  112068 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.045517  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.045554  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.046580  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.046825  112068 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I1109 00:53:09.046922  112068 reflector.go:188] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I1109 00:53:09.047141  112068 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.047487  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.047546  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.048421  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.048846  112068 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1109 00:53:09.048907  112068 master.go:504] Enabling API group "networking.k8s.io".
I1109 00:53:09.048919  112068 reflector.go:188] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1109 00:53:09.049016  112068 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.049282  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.049331  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.049996  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.050620  112068 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I1109 00:53:09.050644  112068 master.go:504] Enabling API group "node.k8s.io".
I1109 00:53:09.050747  112068 reflector.go:188] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I1109 00:53:09.050976  112068 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.051174  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.051260  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.051557  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.052646  112068 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I1109 00:53:09.052730  112068 reflector.go:188] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I1109 00:53:09.052921  112068 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.053060  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.053105  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.054002  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.054536  112068 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I1109 00:53:09.054614  112068 master.go:504] Enabling API group "policy".
I1109 00:53:09.054641  112068 reflector.go:188] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I1109 00:53:09.054719  112068 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.054972  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.055026  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.055450  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.056503  112068 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1109 00:53:09.056583  112068 reflector.go:188] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1109 00:53:09.056732  112068 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.056885  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.056909  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.057803  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.057888  112068 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1109 00:53:09.057939  112068 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.058112  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.058141  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.058164  112068 reflector.go:188] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1109 00:53:09.058920  112068 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1109 00:53:09.058965  112068 reflector.go:188] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1109 00:53:09.059082  112068 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.059195  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.059284  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.059465  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.060108  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.060423  112068 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1109 00:53:09.060524  112068 reflector.go:188] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1109 00:53:09.060525  112068 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.060837  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.060866  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.061668  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.062144  112068 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1109 00:53:09.062274  112068 reflector.go:188] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1109 00:53:09.062391  112068 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.062497  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.062516  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.063523  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.063895  112068 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1109 00:53:09.063952  112068 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.064041  112068 reflector.go:188] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1109 00:53:09.064148  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.064173  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.064787  112068 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1109 00:53:09.064896  112068 reflector.go:188] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1109 00:53:09.064988  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.064985  112068 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.065136  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.065173  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.066098  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.066885  112068 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1109 00:53:09.066924  112068 master.go:504] Enabling API group "rbac.authorization.k8s.io".
I1109 00:53:09.066966  112068 reflector.go:188] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1109 00:53:09.067957  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.069487  112068 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.069626  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.069654  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.070352  112068 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1109 00:53:09.070433  112068 reflector.go:188] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1109 00:53:09.070577  112068 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.070704  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.070731  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.071351  112068 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1109 00:53:09.071379  112068 master.go:504] Enabling API group "scheduling.k8s.io".
I1109 00:53:09.071524  112068 master.go:493] Skipping disabled API group "settings.k8s.io".
I1109 00:53:09.071567  112068 reflector.go:188] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1109 00:53:09.071633  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.071770  112068 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.071929  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.071956  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.072557  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.072868  112068 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1109 00:53:09.072947  112068 reflector.go:188] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1109 00:53:09.073088  112068 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.073223  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.073248  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.074596  112068 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1109 00:53:09.074665  112068 reflector.go:188] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1109 00:53:09.074720  112068 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.074851  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.074879  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.075374  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.075871  112068 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1109 00:53:09.075921  112068 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.076088  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.076100  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.076294  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.076082  112068 reflector.go:188] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1109 00:53:09.077444  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.077628  112068 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I1109 00:53:09.077687  112068 reflector.go:188] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I1109 00:53:09.078667  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.078688  112068 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.078824  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.078856  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.079678  112068 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1109 00:53:09.079764  112068 reflector.go:188] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1109 00:53:09.079911  112068 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.080064  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.080085  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.081030  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.081369  112068 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1109 00:53:09.081431  112068 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.081531  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.081558  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.081592  112068 reflector.go:188] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1109 00:53:09.082144  112068 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1109 00:53:09.082203  112068 master.go:504] Enabling API group "storage.k8s.io".
I1109 00:53:09.082274  112068 reflector.go:188] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1109 00:53:09.082737  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.082866  112068 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.083074  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.083093  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.083762  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.084728  112068 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I1109 00:53:09.084813  112068 reflector.go:188] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I1109 00:53:09.084962  112068 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.085192  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.085249  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.085945  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.086107  112068 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I1109 00:53:09.086333  112068 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.086447  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.086467  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.086556  112068 reflector.go:188] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I1109 00:53:09.087650  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.088806  112068 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I1109 00:53:09.088868  112068 reflector.go:188] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I1109 00:53:09.089226  112068 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.089403  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.089435  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.092714  112068 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I1109 00:53:09.093119  112068 watch_cache.go:409] Replace watchCache (rev: 31109) 
I1109 00:53:09.093252  112068 reflector.go:188] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I1109 00:53:09.093408  112068 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.094287  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.095037  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.094509  112068 watch_cache.go:409] Replace watchCache (rev: 31110) 
I1109 00:53:09.096490  112068 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I1109 00:53:09.096520  112068 master.go:504] Enabling API group "apps".
I1109 00:53:09.096644  112068 reflector.go:188] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I1109 00:53:09.096665  112068 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.096870  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.096930  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.097774  112068 watch_cache.go:409] Replace watchCache (rev: 31111) 
I1109 00:53:09.100805  112068 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1109 00:53:09.100861  112068 reflector.go:188] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1109 00:53:09.100872  112068 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.101104  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.101128  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.102561  112068 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1109 00:53:09.102630  112068 reflector.go:188] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1109 00:53:09.103674  112068 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.105582  112068 watch_cache.go:409] Replace watchCache (rev: 31112) 
I1109 00:53:09.106327  112068 watch_cache.go:409] Replace watchCache (rev: 31112) 
I1109 00:53:09.113353  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.113442  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.114706  112068 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1109 00:53:09.114970  112068 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.115246  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.115345  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.115609  112068 reflector.go:188] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1109 00:53:09.116986  112068 watch_cache.go:409] Replace watchCache (rev: 31112) 
I1109 00:53:09.117228  112068 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1109 00:53:09.117260  112068 master.go:504] Enabling API group "admissionregistration.k8s.io".
I1109 00:53:09.117359  112068 reflector.go:188] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1109 00:53:09.117350  112068 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.117795  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.117845  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.118573  112068 watch_cache.go:409] Replace watchCache (rev: 31112) 
I1109 00:53:09.129874  112068 store.go:1342] Monitoring events count at <storage-prefix>//events
I1109 00:53:09.129916  112068 master.go:504] Enabling API group "events.k8s.io".
I1109 00:53:09.130009  112068 reflector.go:188] Listing and watching *core.Event from storage/cacher.go:/events
I1109 00:53:09.130361  112068 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.130638  112068 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.130958  112068 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.131118  112068 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.131297  112068 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.131487  112068 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.131770  112068 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.131806  112068 watch_cache.go:409] Replace watchCache (rev: 31112) 
I1109 00:53:09.131920  112068 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.132064  112068 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.132183  112068 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.133272  112068 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.133641  112068 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.134676  112068 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.135058  112068 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.136009  112068 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.136582  112068 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.137622  112068 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.137927  112068 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.138864  112068 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.139286  112068 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 00:53:09.139351  112068 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I1109 00:53:09.140392  112068 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.140574  112068 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.140997  112068 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.142007  112068 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.142874  112068 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.143941  112068 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.144393  112068 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.145620  112068 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.146578  112068 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.146878  112068 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.147781  112068 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 00:53:09.147855  112068 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I1109 00:53:09.149311  112068 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.149703  112068 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.150465  112068 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.151263  112068 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.153753  112068 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.154684  112068 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.155852  112068 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.162325  112068 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.163460  112068 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.164661  112068 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.166038  112068 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 00:53:09.166172  112068 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I1109 00:53:09.167179  112068 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.167919  112068 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 00:53:09.167987  112068 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I1109 00:53:09.168842  112068 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.169515  112068 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.170326  112068 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.170619  112068 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.171492  112068 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.172044  112068 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.172651  112068 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.173383  112068 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 00:53:09.173554  112068 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I1109 00:53:09.174621  112068 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.175621  112068 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.176052  112068 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.177109  112068 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.177448  112068 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.177865  112068 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.178894  112068 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.179235  112068 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.179598  112068 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.180429  112068 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.180902  112068 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.181267  112068 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 00:53:09.181349  112068 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1109 00:53:09.181361  112068 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1109 00:53:09.182295  112068 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.183241  112068 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.184188  112068 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.184851  112068 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:53:09.185997  112068 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cc08644e-00cc-4109-8d71-8e808dc3f283", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 00:53:09.190878  112068 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1109 00:53:09.191200  112068 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1109 00:53:09.191229  112068 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I1109 00:53:09.191531  112068 reflector.go:153] Starting reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I1109 00:53:09.191547  112068 reflector.go:188] Listing and watching *v1.ConfigMap from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I1109 00:53:09.193008  112068 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0: (860.239µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34862]
I1109 00:53:09.193127  112068 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.953141ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:09.194038  112068 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:53:09.194077  112068 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I1109 00:53:09.194090  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:09.194101  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:09.194121  112068 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:09.194150  112068 httplog.go:90] GET /healthz: (320.641µs) 0 [Go-http-client/1.1 127.0.0.1:34860]
I1109 00:53:09.195050  112068 get.go:251] Starting watch for /api/v1/namespaces/kube-system/configmaps, rv=31104 labels= fields= timeout=5m47s
I1109 00:53:09.197358  112068 httplog.go:90] GET /api/v1/services: (1.430515ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:09.206592  112068 httplog.go:90] GET /api/v1/services: (2.029787ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:09.210716  112068 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:53:09.210754  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:09.210765  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:09.210774  112068 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:09.210823  112068 httplog.go:90] GET /healthz: (220.791µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:09.213929  112068 httplog.go:90] GET /api/v1/services: (1.739445ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:09.214034  112068 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.624299ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34866]
I1109 00:53:09.215479  112068 httplog.go:90] GET /api/v1/services: (1.268297ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:09.216928  112068 httplog.go:90] POST /api/v1/namespaces: (2.43105ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34866]
I1109 00:53:09.219445  112068 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.011963ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:09.221618  112068 httplog.go:90] POST /api/v1/namespaces: (1.799023ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:09.222851  112068 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (912.081µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:09.225034  112068 httplog.go:90] POST /api/v1/namespaces: (1.816833ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:09.291463  112068 shared_informer.go:227] caches populated
I1109 00:53:09.291500  112068 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
I1109 00:53:09.295897  112068 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:53:09.295954  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:09.295972  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:09.295986  112068 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:09.296032  112068 httplog.go:90] GET /healthz: (445.966µs) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:09.311782  112068 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:53:09.311829  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:09.311844  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:09.311854  112068 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:09.311891  112068 httplog.go:90] GET /healthz: (259.918µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:09.395788  112068 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:53:09.395838  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:09.395854  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:09.395863  112068 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:09.395919  112068 httplog.go:90] GET /healthz: (286.722µs) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:09.411833  112068 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:53:09.411877  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:09.411889  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:09.411905  112068 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:09.411943  112068 httplog.go:90] GET /healthz: (313.505µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:09.495728  112068 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:53:09.495757  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:09.495768  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:09.495778  112068 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:09.495808  112068 httplog.go:90] GET /healthz: (253.782µs) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:09.511844  112068 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:53:09.511877  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:09.511888  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:09.511896  112068 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:09.511952  112068 httplog.go:90] GET /healthz: (260.827µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:09.595729  112068 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:53:09.595761  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:09.595771  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:09.595780  112068 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:09.595809  112068 httplog.go:90] GET /healthz: (238.949µs) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:09.611668  112068 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:53:09.611703  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:09.611714  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:09.611722  112068 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:09.611763  112068 httplog.go:90] GET /healthz: (260.487µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:09.695730  112068 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:53:09.695763  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:09.695774  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:09.695797  112068 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:09.695879  112068 httplog.go:90] GET /healthz: (300.203µs) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:09.712834  112068 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:53:09.712872  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:09.712883  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:09.712892  112068 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:09.712925  112068 httplog.go:90] GET /healthz: (245.412µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:09.795698  112068 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:53:09.795741  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:09.795753  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:09.795762  112068 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:09.795794  112068 httplog.go:90] GET /healthz: (249.86µs) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:09.811723  112068 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:53:09.811757  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:09.811774  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:09.811783  112068 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:09.811812  112068 httplog.go:90] GET /healthz: (214.162µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:09.895740  112068 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:53:09.895774  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:09.895786  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:09.895796  112068 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:09.895862  112068 httplog.go:90] GET /healthz: (264.65µs) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:09.911674  112068 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:53:09.911710  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:09.911723  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:09.911732  112068 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:09.911778  112068 httplog.go:90] GET /healthz: (251.553µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:09.946606  112068 client.go:361] parsed scheme: "endpoint"
I1109 00:53:09.946714  112068 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:53:09.996781  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:09.996811  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:09.996822  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:09.996887  112068 httplog.go:90] GET /healthz: (1.385775ms) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:10.012773  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.012825  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:10.012833  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.012886  112068 httplog.go:90] GET /healthz: (1.371673ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.096565  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.096594  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:10.096605  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.096681  112068 httplog.go:90] GET /healthz: (1.157406ms) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:10.112656  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.112682  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:10.112692  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.112726  112068 httplog.go:90] GET /healthz: (1.204171ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.197234  112068 httplog.go:90] GET /apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical: (6.570721ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.197651  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.92673ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.198268  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.198306  112068 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:53:10.198322  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.198362  112068 httplog.go:90] GET /healthz: (2.527942ms) 0 [Go-http-client/1.1 127.0.0.1:34988]
I1109 00:53:10.201759  112068 httplog.go:90] POST /apis/scheduling.k8s.io/v1/priorityclasses: (3.320163ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.202779  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.29215ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34988]
I1109 00:53:10.202983  112068 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I1109 00:53:10.205308  112068 httplog.go:90] GET /apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical: (2.1498ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.207908  112068 httplog.go:90] POST /apis/scheduling.k8s.io/v1/priorityclasses: (1.760151ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.208086  112068 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I1109 00:53:10.208110  112068 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I1109 00:53:10.208591  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (3.449429ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34988]
I1109 00:53:10.210116  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.002646ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.211475  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (937.922µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.212698  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.212719  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.212752  112068 httplog.go:90] GET /healthz: (1.276714ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.212854  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.056745ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.214120  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (931.535µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.215417  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (901.347µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.216716  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (969.82µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.217888  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (850.552µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.222511  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.827962ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.222897  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1109 00:53:10.224653  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.400203ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.227416  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.202994ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.227736  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1109 00:53:10.228785  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (888.989µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.231342  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.185681ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.231551  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1109 00:53:10.232685  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (984.839µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.235193  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.132786ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.235676  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I1109 00:53:10.237099  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.062884ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.240978  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.283091ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.241383  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I1109 00:53:10.242646  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.065977ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.245416  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.194578ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.245635  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I1109 00:53:10.247109  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.146935ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.250450  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.921968ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.250828  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I1109 00:53:10.251866  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (849.934µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.254077  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.773174ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.254485  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1109 00:53:10.255888  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.168005ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.261308  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.720376ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.261850  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1109 00:53:10.263584  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.495214ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.266003  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.937877ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.266401  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1109 00:53:10.268280  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.28857ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.270493  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.657502ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.270772  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1109 00:53:10.272748  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.743622ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.275734  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.379898ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.276103  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I1109 00:53:10.277366  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (957.147µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.280011  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.15124ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.280233  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1109 00:53:10.284494  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (3.813059ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.287193  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.102214ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.287455  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1109 00:53:10.288649  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (967.006µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.291276  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.943438ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.291460  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1109 00:53:10.292763  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.089895ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.295610  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.429027ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.296061  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1109 00:53:10.298587  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.298611  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.298645  112068 httplog.go:90] GET /healthz: (3.279448ms) 0 [Go-http-client/1.1 127.0.0.1:34860]
I1109 00:53:10.299327  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (3.049983ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.302772  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.916754ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.302971  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1109 00:53:10.304292  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.09534ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.306617  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.903449ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.307005  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1109 00:53:10.308470  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.177017ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.310435  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.590525ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.310797  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1109 00:53:10.312781  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.312807  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.312878  112068 httplog.go:90] GET /healthz: (885.845µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.313336  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.328263ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.315765  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.952554ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.316094  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1109 00:53:10.317480  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.146273ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.320550  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.619277ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.320781  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1109 00:53:10.323384  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (2.401489ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.325632  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.752289ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.325826  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1109 00:53:10.329063  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (2.992135ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.332053  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.386419ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.332522  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1109 00:53:10.333938  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.101791ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.336744  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.111532ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.337111  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1109 00:53:10.338476  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.086144ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.352057  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (12.938849ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.352595  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1109 00:53:10.354721  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.710561ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.358483  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.918019ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.358991  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1109 00:53:10.360815  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.349432ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.363699  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.38106ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.363927  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1109 00:53:10.366272  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (2.148093ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.373138  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.4781ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.373483  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1109 00:53:10.375189  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.267556ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.377950  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.215432ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.378256  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1109 00:53:10.379793  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.35602ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.382468  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.055554ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.382895  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1109 00:53:10.384680  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.058759ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.387843  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.509093ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.388184  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1109 00:53:10.389886  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.443967ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.392678  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.251383ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.393442  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1109 00:53:10.394821  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.01656ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.396296  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.396394  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.396578  112068 httplog.go:90] GET /healthz: (1.190208ms) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:10.398376  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.737587ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.398596  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1109 00:53:10.400055  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.326443ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.405776  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.348166ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.407277  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1109 00:53:10.408739  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.131554ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.411744  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.331876ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.412341  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1109 00:53:10.413072  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.413202  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.413438  112068 httplog.go:90] GET /healthz: (1.865888ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.414612  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.24182ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.417538  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.331601ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.417782  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1109 00:53:10.421279  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (3.27028ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.424043  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.190254ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.424355  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1109 00:53:10.425684  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.082314ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.428424  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.007974ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.428667  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1109 00:53:10.430135  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.271157ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.432512  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.939244ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.432772  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1109 00:53:10.434018  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.043628ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.436605  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.967287ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.436846  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1109 00:53:10.438054  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (970.023µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.440593  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.830813ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.440984  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1109 00:53:10.442986  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.792303ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.445290  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.870454ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.445537  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1109 00:53:10.446912  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.054532ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.463805  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (7.142489ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.464093  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1109 00:53:10.465975  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.62084ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.468931  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.450817ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.469269  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1109 00:53:10.471447  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.930795ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.476322  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.38025ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.476828  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1109 00:53:10.478088  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.012531ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.481437  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.835527ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.481828  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1109 00:53:10.484172  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.388389ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.486880  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.120279ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.487268  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1109 00:53:10.488407  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (830.8µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.491091  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.089932ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.491428  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1109 00:53:10.492626  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (923.354µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.495829  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.612525ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.496307  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1109 00:53:10.497816  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.305844ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.497818  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.499186  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.499435  112068 httplog.go:90] GET /healthz: (3.844283ms) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:10.500190  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.978123ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.500698  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1109 00:53:10.502539  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.66175ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.505038  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.135654ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.505399  112068 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1109 00:53:10.506596  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (843.1µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.510423  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.242391ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.510898  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1109 00:53:10.515491  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (4.241716ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.517934  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.870291ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.518109  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1109 00:53:10.519721  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.418558ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.531835  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.532107  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.532328  112068 httplog.go:90] GET /healthz: (1.720722ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.534553  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.797964ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.534982  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1109 00:53:10.554941  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (2.25716ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.574071  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.508906ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.574363  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I1109 00:53:10.592978  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.260607ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.596570  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.596733  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.596911  112068 httplog.go:90] GET /healthz: (1.245288ms) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:10.612850  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.612883  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.612940  112068 httplog.go:90] GET /healthz: (1.507132ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.613594  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.608914ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.613871  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1109 00:53:10.632283  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.343545ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.666478  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (15.553359ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.666742  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1109 00:53:10.672513  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.508634ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.694688  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.653461ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.695099  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1109 00:53:10.697752  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.697788  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.697834  112068 httplog.go:90] GET /healthz: (2.392249ms) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:10.714492  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.714519  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.714575  112068 httplog.go:90] GET /healthz: (3.128948ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.715031  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (3.608721ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.733789  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.752133ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.734131  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1109 00:53:10.752382  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.38822ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.773896  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.900956ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.774169  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1109 00:53:10.792839  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.298582ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.796823  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.796852  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.796910  112068 httplog.go:90] GET /healthz: (1.39525ms) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:10.815988  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.816013  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.816056  112068 httplog.go:90] GET /healthz: (3.620957ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.819461  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.850889ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.819760  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1109 00:53:10.832365  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.257416ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.854478  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.75825ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.854784  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1109 00:53:10.872110  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.245459ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.893930  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.854026ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.894170  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1109 00:53:10.899903  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.899933  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.899976  112068 httplog.go:90] GET /healthz: (2.757825ms) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:10.912654  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.912698  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.912743  112068 httplog.go:90] GET /healthz: (1.251245ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:10.914454  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (3.427991ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.933761  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.748114ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.934045  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1109 00:53:10.952471  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.486747ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.974143  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.18208ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.974607  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1109 00:53:10.992495  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.46646ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:10.996524  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:10.996551  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:10.996601  112068 httplog.go:90] GET /healthz: (1.144637ms) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:11.014316  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.014344  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.014381  112068 httplog.go:90] GET /healthz: (2.585221ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.015619  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.738478ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.015874  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1109 00:53:11.044713  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (11.322698ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.053314  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.383253ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.053721  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1109 00:53:11.072365  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.266365ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.093875  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.543545ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.094222  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1109 00:53:11.096769  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.096803  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.096860  112068 httplog.go:90] GET /healthz: (1.274115ms) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:11.112455  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.112496  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.112536  112068 httplog.go:90] GET /healthz: (1.151568ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.112803  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.876048ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.133379  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.484431ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.133638  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1109 00:53:11.152989  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.462069ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.173495  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.525274ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.173768  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1109 00:53:11.192703  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.731899ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.196562  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.196593  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.196642  112068 httplog.go:90] GET /healthz: (1.181452ms) 0 [Go-http-client/1.1 127.0.0.1:34860]
I1109 00:53:11.214749  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.215156  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.215428  112068 httplog.go:90] GET /healthz: (1.767538ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.214935  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.001185ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.215925  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1109 00:53:11.232647  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.701009ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.253839  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.907946ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.254150  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1109 00:53:11.272633  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.618708ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.293842  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.887072ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.294107  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1109 00:53:11.297644  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.297676  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.297721  112068 httplog.go:90] GET /healthz: (2.136869ms) 0 [Go-http-client/1.1 127.0.0.1:34860]
I1109 00:53:11.312766  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.312798  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.312861  112068 httplog.go:90] GET /healthz: (1.320799ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.313340  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (2.333726ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.336497  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.498505ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.336804  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1109 00:53:11.352529  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.418284ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.373923  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.967577ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.374178  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1109 00:53:11.392441  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.499708ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.396462  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.396493  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.396552  112068 httplog.go:90] GET /healthz: (1.076138ms) 0 [Go-http-client/1.1 127.0.0.1:34860]
I1109 00:53:11.413447  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.413490  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.413529  112068 httplog.go:90] GET /healthz: (1.549795ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.413644  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.733112ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.414100  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1109 00:53:11.432124  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.21531ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.454329  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.342963ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.454742  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1109 00:53:11.472561  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.61055ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.493477  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.559877ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.493753  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1109 00:53:11.496574  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.496604  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.496641  112068 httplog.go:90] GET /healthz: (1.206367ms) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:11.512851  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.512883  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.512938  112068 httplog.go:90] GET /healthz: (1.360265ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.512998  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.053696ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.533716  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.76287ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.533974  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1109 00:53:11.552734  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.631492ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.573944  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.804326ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.574530  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1109 00:53:11.592444  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.427908ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.596719  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.596749  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.596795  112068 httplog.go:90] GET /healthz: (1.279977ms) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:11.614006  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.614034  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.614072  112068 httplog.go:90] GET /healthz: (1.614872ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.617088  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.750108ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.617407  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1109 00:53:11.633598  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (2.065192ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.654847  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.318365ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.655172  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1109 00:53:11.672605  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.598883ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.693797  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.806749ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.694070  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1109 00:53:11.696684  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.696719  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.696772  112068 httplog.go:90] GET /healthz: (1.286105ms) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:11.712814  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.712845  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.712900  112068 httplog.go:90] GET /healthz: (1.461613ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.712965  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (2.019965ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.733933  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.930551ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.734189  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1109 00:53:11.753272  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.964048ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.774137  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.901369ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.774725  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1109 00:53:11.792599  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.66693ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.796573  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.796600  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.796634  112068 httplog.go:90] GET /healthz: (1.188712ms) 0 [Go-http-client/1.1 127.0.0.1:34860]
I1109 00:53:11.813971  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.058223ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.814413  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.814428  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1109 00:53:11.814460  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.814509  112068 httplog.go:90] GET /healthz: (2.714017ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.832599  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.567677ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.855737  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.643088ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.856045  112068 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1109 00:53:11.873373  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.73317ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.875900  112068 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.042602ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.896698  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.904321ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.897289  112068 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1109 00:53:11.897747  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.897769  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.897811  112068 httplog.go:90] GET /healthz: (1.81535ms) 0 [Go-http-client/1.1 127.0.0.1:34860]
I1109 00:53:11.912848  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.912886  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.912946  112068 httplog.go:90] GET /healthz: (1.431445ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.912945  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.911187ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:11.914931  112068 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.498355ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.934107  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.987576ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.934545  112068 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1109 00:53:11.953720  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.722726ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.956337  112068 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.878799ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.977066  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (5.510431ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.977393  112068 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1109 00:53:11.995140  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.578584ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:11.996623  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:11.996650  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:11.996687  112068 httplog.go:90] GET /healthz: (1.169293ms) 0 [Go-http-client/1.1 127.0.0.1:34860]
I1109 00:53:11.998739  112068 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.282518ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:12.014687  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.655593ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:12.015038  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:12.015836  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:12.016024  112068 httplog.go:90] GET /healthz: (3.9036ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.016201  112068 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1109 00:53:12.042244  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (9.562534ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.044446  112068 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.713958ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.054688  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.832455ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.054976  112068 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1109 00:53:12.074250  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.238913ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.079112  112068 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.371452ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.093117  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.158461ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.093408  112068 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1109 00:53:12.096388  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:12.096416  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:12.096554  112068 httplog.go:90] GET /healthz: (1.058019ms) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:12.112257  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.314778ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.112516  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:12.112550  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:12.112603  112068 httplog.go:90] GET /healthz: (1.252367ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:12.114169  112068 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.206168ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:12.133186  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.26188ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:12.133453  112068 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1109 00:53:12.153538  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (2.569277ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:12.155784  112068 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.503631ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:12.176287  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (4.651479ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:12.176578  112068 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1109 00:53:12.192668  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.503796ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:12.194593  112068 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.485854ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:12.196344  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:12.196373  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:12.196415  112068 httplog.go:90] GET /healthz: (840.805µs) 0 [Go-http-client/1.1 127.0.0.1:34860]
I1109 00:53:12.213007  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.123641ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:12.213400  112068 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I1109 00:53:12.214182  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:12.214247  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:12.214285  112068 httplog.go:90] GET /healthz: (2.212229ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.231946  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.070006ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.233705  112068 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.283761ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.255855  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.180349ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.256119  112068 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1109 00:53:12.272170  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.208986ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.273734  112068 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.106925ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.293566  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.609376ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.293850  112068 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1109 00:53:12.298368  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:12.298418  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:12.298474  112068 httplog.go:90] GET /healthz: (2.981155ms) 0 [Go-http-client/1.1 127.0.0.1:34868]
I1109 00:53:12.312770  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:12.312803  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:12.312846  112068 httplog.go:90] GET /healthz: (1.467284ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:12.313338  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (2.427578ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.315361  112068 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.549901ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.333334  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.398188ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.333588  112068 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1109 00:53:12.353688  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (2.707769ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.355762  112068 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.458293ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.374501  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.871849ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.374880  112068 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1109 00:53:12.395425  112068 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (4.470299ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.397331  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:12.397359  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:12.397403  112068 httplog.go:90] GET /healthz: (1.274204ms) 0 [Go-http-client/1.1 127.0.0.1:34860]
I1109 00:53:12.398764  112068 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.827851ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.414029  112068 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:53:12.414053  112068 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:53:12.414092  112068 httplog.go:90] GET /healthz: (2.701101ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:12.414799  112068 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.841492ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.415000  112068 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1109 00:53:12.504410  112068 httplog.go:90] GET /healthz: (8.771646ms) 200 [Go-http-client/1.1 127.0.0.1:34868]
W1109 00:53:12.505444  112068 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:53:12.505469  112068 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:53:12.505553  112068 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:53:12.505644  112068 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:53:12.505656  112068 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:53:12.505671  112068 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:53:12.505678  112068 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:53:12.505689  112068 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:53:12.505707  112068 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:53:12.505721  112068 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:53:12.505739  112068 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1109 00:53:12.505765  112068 factory.go:300] Creating scheduler from algorithm provider 'DefaultProvider'
I1109 00:53:12.505777  112068 factory.go:392] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I1109 00:53:12.507434  112068 reflector.go:153] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.507456  112068 reflector.go:188] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.507871  112068 reflector.go:153] Starting reflector *v1beta1.CSINode (0s) from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.507882  112068 reflector.go:188] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.508344  112068 reflector.go:153] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.508356  112068 reflector.go:188] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.509190  112068 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (1.157592ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.508178  112068 reflector.go:153] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.509310  112068 reflector.go:188] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.509963  112068 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (859.426µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35380]
I1109 00:53:12.510055  112068 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (749.872µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:12.511083  112068 reflector.go:153] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.511101  112068 reflector.go:188] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.511336  112068 reflector.go:153] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.511360  112068 reflector.go:188] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.511407  112068 reflector.go:153] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.511417  112068 reflector.go:188] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.511704  112068 reflector.go:153] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.511716  112068 reflector.go:188] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.512518  112068 reflector.go:153] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.512535  112068 reflector.go:188] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.513303  112068 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (476.728µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:12.513946  112068 reflector.go:153] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.513961  112068 reflector.go:188] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.514396  112068 reflector.go:153] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.514413  112068 reflector.go:188] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.516522  112068 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (6.609331ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35382]
I1109 00:53:12.517999  112068 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (528.29µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35390]
I1109 00:53:12.518051  112068 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (660.768µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35382]
I1109 00:53:12.519870  112068 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (4.159816ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35386]
I1109 00:53:12.519991  112068 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=31109 labels= fields= timeout=7m28s
I1109 00:53:12.521409  112068 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (673.187µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34860]
I1109 00:53:12.523814  112068 get.go:251] Starting watch for /api/v1/services, rv=31109 labels= fields= timeout=8m33s
I1109 00:53:12.524413  112068 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (12.130166ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:53:12.527818  112068 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (12.630886ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35388]
I1109 00:53:12.534811  112068 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=31104 labels= fields= timeout=7m29s
I1109 00:53:12.535413  112068 get.go:251] Starting watch for /api/v1/nodes, rv=31108 labels= fields= timeout=6m26s
I1109 00:53:12.535898  112068 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=31104 labels= fields= timeout=9m32s
I1109 00:53:12.536200  112068 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=31109 labels= fields= timeout=7m2s
I1109 00:53:12.537539  112068 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=31109 labels= fields= timeout=9m52s
I1109 00:53:12.538950  112068 httplog.go:90] GET /healthz: (18.843919ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35384]
I1109 00:53:12.540413  112068 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=31109 labels= fields= timeout=7m15s
I1109 00:53:12.541421  112068 httplog.go:90] GET /api/v1/namespaces/default: (2.09982ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35384]
I1109 00:53:12.544806  112068 get.go:251] Starting watch for /api/v1/pods, rv=31109 labels= fields= timeout=9m45s
I1109 00:53:12.550035  112068 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=31110 labels= fields= timeout=8m45s
I1109 00:53:12.553176  112068 httplog.go:90] POST /api/v1/namespaces: (10.603156ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35384]
I1109 00:53:12.556673  112068 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=31109 labels= fields= timeout=7m40s
I1109 00:53:12.557782  112068 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.879203ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35384]
I1109 00:53:12.568365  112068 httplog.go:90] POST /api/v1/namespaces/default/services: (9.277628ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35384]
I1109 00:53:12.572066  112068 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.852111ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35384]
I1109 00:53:12.583122  112068 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (9.797841ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35384]
I1109 00:53:12.607038  112068 shared_informer.go:227] caches populated
I1109 00:53:12.607095  112068 shared_informer.go:227] caches populated
I1109 00:53:12.607102  112068 shared_informer.go:227] caches populated
I1109 00:53:12.607109  112068 shared_informer.go:227] caches populated
I1109 00:53:12.607115  112068 shared_informer.go:227] caches populated
I1109 00:53:12.607122  112068 shared_informer.go:227] caches populated
I1109 00:53:12.607128  112068 shared_informer.go:227] caches populated
I1109 00:53:12.607148  112068 shared_informer.go:227] caches populated
I1109 00:53:12.607154  112068 shared_informer.go:227] caches populated
I1109 00:53:12.607165  112068 shared_informer.go:227] caches populated
I1109 00:53:12.607171  112068 shared_informer.go:227] caches populated
I1109 00:53:12.607732  112068 plugins.go:631] Loaded volume plugin "kubernetes.io/mock-provisioner"
W1109 00:53:12.607944  112068 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:53:12.608111  112068 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:53:12.608307  112068 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:53:12.608405  112068 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:53:12.608555  112068 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1109 00:53:12.608978  112068 shared_informer.go:227] caches populated
I1109 00:53:12.609141  112068 pv_controller_base.go:289] Starting persistent volume controller
I1109 00:53:12.609241  112068 shared_informer.go:197] Waiting for caches to sync for persistent volume
I1109 00:53:12.609615  112068 reflector.go:153] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.609724  112068 reflector.go:188] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.610448  112068 reflector.go:153] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.610483  112068 reflector.go:188] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.610602  112068 reflector.go:153] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.610614  112068 reflector.go:188] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.610991  112068 reflector.go:153] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.611002  112068 reflector.go:188] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.611263  112068 reflector.go:153] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.611276  112068 reflector.go:188] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:135
I1109 00:53:12.611725  112068 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (1.422081ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35384]
I1109 00:53:12.612898  112068 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (658.504µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35420]
I1109 00:53:12.613043  112068 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (357.849µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I1109 00:53:12.613535  112068 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (384.608µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I1109 00:53:12.613899  112068 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=31104 labels= fields= timeout=7m24s
I1109 00:53:12.613934  112068 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (985.344µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35418]
I1109 00:53:12.616450  112068 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=31109 labels= fields= timeout=9m45s
I1109 00:53:12.616959  112068 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=31104 labels= fields= timeout=8m42s
I1109 00:53:12.617854  112068 get.go:251] Starting watch for /api/v1/nodes, rv=31108 labels= fields= timeout=7m45s
I1109 00:53:12.619196  112068 get.go:251] Starting watch for /api/v1/pods, rv=31109 labels= fields= timeout=7m32s
I1109 00:53:12.709097  112068 shared_informer.go:227] caches populated
I1109 00:53:12.709139  112068 shared_informer.go:227] caches populated
I1109 00:53:12.709146  112068 shared_informer.go:227] caches populated
I1109 00:53:12.709151  112068 shared_informer.go:227] caches populated
I1109 00:53:12.709156  112068 shared_informer.go:227] caches populated
I1109 00:53:12.709482  112068 shared_informer.go:227] caches populated
I1109 00:53:12.709503  112068 shared_informer.go:204] Caches are synced for persistent volume 
I1109 00:53:12.709525  112068 pv_controller_base.go:160] controller initialized
I1109 00:53:12.709605  112068 pv_controller_base.go:426] resyncing PV controller
I1109 00:53:12.719380  112068 httplog.go:90] POST /api/v1/nodes: (4.546817ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35434]
I1109 00:53:12.720358  112068 node_tree.go:86] Added node "node-1" in group "" to NodeTree
I1109 00:53:12.722475  112068 httplog.go:90] POST /api/v1/nodes: (2.446862ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35434]
I1109 00:53:12.723299  112068 node_tree.go:86] Added node "node-2" in group "" to NodeTree
I1109 00:53:12.728013  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (4.948152ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35434]
I1109 00:53:12.732170  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.351148ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35434]
I1109 00:53:12.732873  112068 volume_binding_test.go:191] Running test wait cannot bind
I1109 00:53:12.735623  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.509194ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35434]
I1109 00:53:12.738786  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.597959ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35434]
I1109 00:53:12.743517  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (3.831304ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35434]
I1109 00:53:12.744269  112068 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind", version 31465
I1109 00:53:12.744322  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:12.744341  112068 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind]: no volume found
I1109 00:53:12.744363  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind] status: set phase Pending
I1109 00:53:12.744377  112068 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind] status: phase Pending already set
I1109 00:53:12.744799  112068 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2", Name:"pvc-w-cannotbind", UID:"42d045c2-cd5a-4688-821b-15f166a19a5c", APIVersion:"v1", ResourceVersion:"31465", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1109 00:53:12.749882  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (4.753547ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:12.774887  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (30.592262ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35434]
I1109 00:53:12.775670  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind
I1109 00:53:12.775699  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind
I1109 00:53:12.776016  112068 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind", PVC "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind" on node "node-1"
I1109 00:53:12.776042  112068 scheduler_binder.go:725] storage class "wait-j4pt" of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind" does not support dynamic provisioning
I1109 00:53:12.776152  112068 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind", PVC "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind" on node "node-2"
I1109 00:53:12.776173  112068 scheduler_binder.go:725] storage class "wait-j4pt" of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind" does not support dynamic provisioning
I1109 00:53:12.776248  112068 factory.go:632] Unable to schedule volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I1109 00:53:12.776314  112068 scheduler.go:774] Updating pod condition for volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I1109 00:53:12.778937  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-cannotbind: (2.090722ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:12.782567  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (4.79711ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35444]
I1109 00:53:12.783608  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-cannotbind/status: (6.716421ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35434]
I1109 00:53:12.790570  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-cannotbind: (2.303362ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35444]
I1109 00:53:12.791066  112068 generic_scheduler.go:341] Preemption will not help schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind on any node.
I1109 00:53:12.891744  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-cannotbind: (13.825107ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35444]
I1109 00:53:12.894003  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-cannotbind: (1.615404ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35444]
I1109 00:53:12.900724  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind
I1109 00:53:12.900759  112068 scheduler.go:607] Skip schedule deleting pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind
I1109 00:53:12.904357  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (3.239138ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:12.904635  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (9.990379ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35444]
I1109 00:53:12.911636  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (6.516755ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35444]
I1109 00:53:12.912406  112068 pv_controller_base.go:265] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind" deleted
I1109 00:53:12.914399  112068 httplog.go:90] DELETE /api/v1/persistentvolumes: (2.296635ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35444]
I1109 00:53:12.939355  112068 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (24.298065ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35444]
I1109 00:53:12.939669  112068 volume_binding_test.go:191] Running test wait pvc prebound
I1109 00:53:12.942164  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.260264ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35444]
I1109 00:53:12.948323  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (5.512902ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35444]
I1109 00:53:12.957638  112068 httplog.go:90] POST /api/v1/persistentvolumes: (8.871684ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35444]
I1109 00:53:12.957942  112068 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-pvc-prebound", version 31488
I1109 00:53:12.957991  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Pending, bound to: "", boundByController: false
I1109 00:53:12.958012  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I1109 00:53:12.958021  112068 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I1109 00:53:12.960284  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (2.111993ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35444]
I1109 00:53:12.960559  112068 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound", version 31489
I1109 00:53:12.960643  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1109 00:53:12.960660  112068 pv_controller.go:345] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested
I1109 00:53:12.960680  112068 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested and found: phase: Pending, bound to: "", boundByController: false
I1109 00:53:12.960696  112068 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: volume is unbound, binding
I1109 00:53:12.960720  112068 pv_controller.go:929] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound"
I1109 00:53:12.960730  112068 pv_controller.go:827] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound"
I1109 00:53:12.960753  112068 pv_controller.go:847] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound" bound to volume "pv-w-pvc-prebound"
I1109 00:53:12.963657  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (5.326351ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:12.964036  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 31490
I1109 00:53:12.964069  112068 pv_controller.go:796] volume "pv-w-pvc-prebound" entered phase "Available"
I1109 00:53:12.964094  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 31490
I1109 00:53:12.964109  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I1109 00:53:12.964132  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I1109 00:53:12.964139  112068 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I1109 00:53:12.964162  112068 pv_controller.go:778] updating PersistentVolume[pv-w-pvc-prebound]: phase Available already set
I1109 00:53:12.964357  112068 store.go:365] GuaranteedUpdate of /cc08644e-00cc-4109-8d71-8e808dc3f283/persistentvolumes/pv-w-pvc-prebound failed because of a conflict, going to retry
I1109 00:53:12.964743  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound: (3.17451ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35534]
I1109 00:53:12.965039  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (4.303457ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35444]
I1109 00:53:12.965091  112068 pv_controller.go:850] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 00:53:12.965113  112068 pv_controller.go:932] error binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 00:53:12.965128  112068 pv_controller_base.go:251] could not sync claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 00:53:12.965206  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pvc-prebound
I1109 00:53:12.965243  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pvc-prebound
E1109 00:53:12.965442  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1109 00:53:12.965478  112068 factory.go:648] Error scheduling volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 00:53:12.965515  112068 scheduler.go:774] Updating pod condition for volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I1109 00:53:12.974325  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (5.641394ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35536]
I1109 00:53:12.992427  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (24.206747ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:12.992711  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound/status: (24.445099ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35534]
E1109 00:53:12.997657  112068 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 00:53:13.067696  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.995965ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:13.183410  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.920527ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:13.268131  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.209167ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:13.367788  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.017509ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:13.467597  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.851525ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:13.567681  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.951477ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:13.667638  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.908987ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:13.769188  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.435296ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:13.867707  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.987026ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:13.967732  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.959957ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:14.070282  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.872925ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:14.168181  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.13721ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:14.270070  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (4.022038ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:14.377243  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (11.42669ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:14.484549  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.913385ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:14.567390  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.742177ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:14.669972  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (4.266941ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:14.767514  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.76585ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:14.867665  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.939841ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:14.967670  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.925879ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:15.068286  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.854595ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:15.167482  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.762086ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:15.268781  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.997677ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:15.368015  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.24245ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:15.468497  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.429162ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:15.567992  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.863173ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:15.668190  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.961536ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:15.768237  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.479724ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:15.867669  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.926569ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:15.967842  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.05076ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:16.067326  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.675262ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:16.209206  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (43.438879ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:16.267887  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.050372ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:16.367319  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.563034ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:16.471795  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (5.970223ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:16.570169  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.184415ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:16.667867  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.115385ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:16.767704  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.970427ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:16.867844  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.076635ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:16.967697  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.919418ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:17.067599  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.85975ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:17.167886  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.0663ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:17.267616  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.860854ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:17.368315  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.242214ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:17.467924  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.130398ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:17.569111  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.411145ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:17.667877  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.107542ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:17.768160  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.377328ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:17.868004  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.236502ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:17.967782  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.049528ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:18.067707  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.020582ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:18.167674  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.91276ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:18.267888  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.126106ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:18.367904  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.116316ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:18.467736  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.995771ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:18.569466  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.250635ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:18.667690  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.930853ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:18.767787  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.065678ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:18.867986  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.162511ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:18.967457  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.75048ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:19.067445  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.781369ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:19.167556  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.833903ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:19.267818  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.077995ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:19.367785  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.951409ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:19.469679  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.911099ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:19.568966  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.225962ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:19.667403  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.664929ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:19.770194  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (4.128844ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:19.867510  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.771248ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:19.967488  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.743354ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:20.067565  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.857421ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:20.167609  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.881686ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:20.271018  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (5.236512ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:20.367559  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.81696ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:20.467720  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.807677ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:20.570348  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.778931ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:20.668962  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.230404ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:20.767407  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.614858ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:20.868062  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.214729ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:20.968786  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.837078ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:21.117632  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (51.770797ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:21.167610  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.85654ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:21.269684  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.730165ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:21.367364  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.619906ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:21.468185  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.69204ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:21.569338  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.670457ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:21.667632  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.665784ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:21.767294  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.567081ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:21.873668  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (7.749893ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:21.973269  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (7.527872ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:22.067455  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.764262ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:22.167349  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.574294ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:22.267461  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.669577ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:22.368037  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.606181ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:22.470616  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (4.830603ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:22.541127  112068 httplog.go:90] GET /api/v1/namespaces/default: (1.322096ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:22.542560  112068 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.121976ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:22.544254  112068 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.368677ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:22.568189  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.480385ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:22.667152  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.410096ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:22.767890  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.110755ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:22.867266  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.591085ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:22.968169  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.381463ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:23.067492  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.775571ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:23.167592  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.835765ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:23.267670  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.930981ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:23.368568  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.733959ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:23.468817  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.81145ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:23.568225  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.527622ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:23.668607  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.880218ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:23.770603  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.905535ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:23.867254  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.543612ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:23.983751  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (17.90848ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:24.067280  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.558741ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:24.167701  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.920264ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:24.267778  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.011567ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:24.368072  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.255753ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:24.467481  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.722092ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:24.580771  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.747851ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:24.669005  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.254638ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:24.767643  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.912337ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:24.867572  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.781918ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:24.967572  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.81669ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:25.067684  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.962632ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:25.167452  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.704553ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:25.267693  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.900841ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:25.371713  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.103724ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:25.467864  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.100073ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:25.585409  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (19.56269ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:25.669332  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.447305ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:25.767775  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.999253ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:25.868789  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.115906ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:25.967826  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.986782ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:26.068893  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.188169ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:26.167594  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.895397ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:26.268021  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.283767ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:26.367878  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.101458ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:26.467830  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.015119ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:26.568078  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.268392ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:26.668270  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.125325ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:26.767967  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.11114ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:26.867768  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.99256ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:26.969986  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (4.202935ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:27.068002  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.261209ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:27.167862  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.037737ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:27.267912  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.084392ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:27.368353  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.232372ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:27.469162  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.404967ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:27.567145  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.362573ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:27.667716  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.970352ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:27.709831  112068 pv_controller_base.go:426] resyncing PV controller
I1109 00:53:27.709945  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 31490
I1109 00:53:27.709986  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I1109 00:53:27.710007  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I1109 00:53:27.710018  112068 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I1109 00:53:27.710028  112068 pv_controller.go:778] updating PersistentVolume[pv-w-pvc-prebound]: phase Available already set
I1109 00:53:27.710055  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound" with version 31489
I1109 00:53:27.710072  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1109 00:53:27.710086  112068 pv_controller.go:345] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested
I1109 00:53:27.710102  112068 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested and found: phase: Available, bound to: "", boundByController: false
I1109 00:53:27.710119  112068 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: volume is unbound, binding
I1109 00:53:27.710143  112068 pv_controller.go:929] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound"
I1109 00:53:27.710153  112068 pv_controller.go:827] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound"
I1109 00:53:27.710196  112068 pv_controller.go:847] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound" bound to volume "pv-w-pvc-prebound"
I1109 00:53:27.713496  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pvc-prebound
I1109 00:53:27.713516  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pvc-prebound
E1109 00:53:27.713683  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1109 00:53:27.713725  112068 factory.go:648] Error scheduling volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 00:53:27.713744  112068 scheduler.go:774] Updating pod condition for volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E1109 00:53:27.713757  112068 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 00:53:27.714373  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 32619
I1109 00:53:27.714405  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound (uid: 9f569f18-1e7e-477d-9e11-3dc835b8fa4c)", boundByController: true
I1109 00:53:27.714413  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound
I1109 00:53:27.714427  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound found: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1109 00:53:27.714436  112068 pv_controller.go:617] synchronizing PersistentVolume[pv-w-pvc-prebound]: all is bound
I1109 00:53:27.714445  112068 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1109 00:53:27.717535  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.834807ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:27.718606  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (3.706609ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:27.718867  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound: (8.086632ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:27.719725  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 32619
I1109 00:53:27.719744  112068 pv_controller.go:860] updating PersistentVolume[pv-w-pvc-prebound]: bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound"
I1109 00:53:27.719753  112068 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1109 00:53:27.719966  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 32620
I1109 00:53:27.719984  112068 pv_controller.go:796] volume "pv-w-pvc-prebound" entered phase "Bound"
I1109 00:53:27.719992  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (3.948599ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35536]
I1109 00:53:27.720014  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 32620
I1109 00:53:27.720034  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound (uid: 9f569f18-1e7e-477d-9e11-3dc835b8fa4c)", boundByController: true
I1109 00:53:27.720042  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound
I1109 00:53:27.720057  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound found: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1109 00:53:27.720070  112068 pv_controller.go:617] synchronizing PersistentVolume[pv-w-pvc-prebound]: all is bound
I1109 00:53:27.720077  112068 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1109 00:53:27.720083  112068 pv_controller.go:778] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I1109 00:53:27.721646  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (1.487028ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35440]
I1109 00:53:27.721839  112068 pv_controller.go:788] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound failed: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 00:53:27.721854  112068 pv_controller.go:938] error binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound": failed saving the volume status: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 00:53:27.721868  112068 pv_controller_base.go:251] could not sync claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 00:53:27.767758  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.94525ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:27.867689  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.948651ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:27.967965  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.275747ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:28.067610  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.929849ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:28.167458  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.740942ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:28.267700  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.953296ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:28.367591  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.837953ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:28.467777  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.945802ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:28.567918  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.065401ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:28.667596  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.851269ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:28.767774  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.991689ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:28.868162  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.385251ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:28.967802  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.974765ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:29.067646  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.92263ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:29.167628  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.882175ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:29.267778  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.030849ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:29.367836  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.119052ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:29.467721  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.917466ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:29.512363  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pvc-prebound
I1109 00:53:29.512404  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pvc-prebound
E1109 00:53:29.512608  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1109 00:53:29.512667  112068 factory.go:648] Error scheduling volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 00:53:29.512703  112068 scheduler.go:774] Updating pod condition for volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E1109 00:53:29.512738  112068 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 00:53:29.516510  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.820118ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:29.517785  112068 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events/pod-w-pvc-prebound.15d5585e7ed253f2: (3.343522ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:29.567754  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.038453ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:29.669248  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.420933ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:29.767475  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.747333ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:29.867583  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.794152ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:29.967196  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.429135ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:30.067467  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.758814ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:30.168649  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.90449ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:30.268128  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.059846ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:30.368455  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.742314ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:30.471303  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (5.577292ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:30.567541  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.841134ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:30.673383  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (7.631584ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:30.767854  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.724286ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:30.867578  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.811913ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:30.967562  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.809675ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:31.067690  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.888816ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:31.167826  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.090097ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:31.267514  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.800273ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:31.369717  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.881404ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:31.468577  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.831525ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:31.567899  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.818948ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:31.668021  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.244231ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:31.767719  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.955127ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:31.867687  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.857965ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:31.967777  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.017826ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:32.073279  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (7.354183ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:32.167780  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.637149ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:32.267864  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.113678ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:32.368037  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.263801ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:32.468324  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.532512ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:32.542319  112068 httplog.go:90] GET /api/v1/namespaces/default: (1.995471ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:32.545083  112068 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.165866ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:32.547626  112068 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.917096ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:32.567890  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.077123ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:32.667505  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.785738ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:32.767615  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.865433ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:32.867547  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.760251ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:32.967555  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.754586ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:33.068889  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.187947ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:33.168740  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.007978ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:33.267799  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.948572ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:33.368166  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.34476ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:33.467566  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.773916ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:33.567688  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.938687ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:33.667625  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.850838ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:33.767375  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.597152ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:33.867821  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.915445ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:33.967731  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.910059ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:34.068085  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.250079ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:34.167848  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.047072ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:34.278139  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.31343ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:34.367777  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.982062ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:34.467929  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.253778ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:34.567727  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.991333ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:34.672646  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (4.496171ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:34.767566  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.788865ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:34.868008  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.204694ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:34.967639  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.921492ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:35.068025  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.290642ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:35.167648  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.832288ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:35.267985  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.096821ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:35.367986  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.110223ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:35.467874  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.004286ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:35.567818  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.937359ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:35.667794  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.912423ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:35.767796  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.992463ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:35.867870  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.006134ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:35.967861  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.060472ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:36.068669  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.871528ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:36.168547  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.817377ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:36.272774  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (6.95458ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:36.367615  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.876199ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:36.468232  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.420294ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:36.568059  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.250679ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:36.667537  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.794576ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:36.767748  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.864287ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:36.867652  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.904218ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:36.969771  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (4.038859ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:37.067992  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.085675ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:37.168012  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.258368ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:37.268301  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.427974ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:37.370792  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (4.930716ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:37.507285  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (41.515879ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:37.573490  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (6.813384ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:37.667507  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.823402ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:37.767724  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.921063ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:37.867512  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.797303ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:37.968694  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.948683ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:38.067846  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.114527ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:38.167606  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.822908ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:38.268176  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.16211ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:38.370067  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (4.063061ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:38.467887  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.024842ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:38.567920  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.029552ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:38.668051  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.981969ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:38.768510  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.320014ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:38.872016  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.987097ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:38.971605  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.132918ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:39.070589  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.827169ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:39.168124  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.360034ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:39.269252  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.940231ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:39.368179  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.269976ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:39.474226  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (8.470194ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:39.568199  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.408878ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:39.667442  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.661989ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:39.767509  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.764389ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:39.869530  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.808621ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:39.967581  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.790202ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:40.067750  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.975561ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:40.170156  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.755738ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:40.268173  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.363376ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:40.368852  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.398557ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:40.467512  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.731631ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:40.567783  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.088799ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:40.667422  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.670329ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:40.767817  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.955633ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:40.869543  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.835621ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:40.980712  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (14.993134ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:41.067804  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.04355ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:41.169559  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.805599ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:41.267556  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.789332ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:41.367228  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.448313ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:41.467509  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.768367ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:41.567677  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.87519ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:41.668873  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.120473ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:41.767446  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.68041ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:41.867757  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.997411ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:41.967982  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.171792ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.067604  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.924386ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.168366  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.54515ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.268848  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.045445ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.368204  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.276903ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.468281  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.395327ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.542079  112068 httplog.go:90] GET /api/v1/namespaces/default: (1.585145ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.545041  112068 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.428157ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.547011  112068 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.422627ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.567779  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.899336ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.667548  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.802433ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.710089  112068 pv_controller_base.go:426] resyncing PV controller
I1109 00:53:42.710184  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 32620
I1109 00:53:42.710287  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound (uid: 9f569f18-1e7e-477d-9e11-3dc835b8fa4c)", boundByController: true
I1109 00:53:42.710301  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound
I1109 00:53:42.710325  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound found: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1109 00:53:42.710340  112068 pv_controller.go:617] synchronizing PersistentVolume[pv-w-pvc-prebound]: all is bound
I1109 00:53:42.710349  112068 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1109 00:53:42.710359  112068 pv_controller.go:778] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I1109 00:53:42.710381  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound" with version 31489
I1109 00:53:42.710395  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1109 00:53:42.710426  112068 pv_controller.go:345] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested
I1109 00:53:42.710454  112068 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested and found: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound (uid: 9f569f18-1e7e-477d-9e11-3dc835b8fa4c)", boundByController: true
I1109 00:53:42.710469  112068 pv_controller.go:388] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: volume already bound, finishing the binding
I1109 00:53:42.710479  112068 pv_controller.go:929] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound"
I1109 00:53:42.710490  112068 pv_controller.go:827] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound"
I1109 00:53:42.710518  112068 pv_controller.go:839] updating PersistentVolume[pv-w-pvc-prebound]: already bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound"
I1109 00:53:42.710527  112068 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1109 00:53:42.710535  112068 pv_controller.go:778] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I1109 00:53:42.710544  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: binding to "pv-w-pvc-prebound"
I1109 00:53:42.710561  112068 pv_controller.go:899] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound"
I1109 00:53:42.713528  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-prebound: (2.548009ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.714333  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pvc-prebound
I1109 00:53:42.714357  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pvc-prebound
E1109 00:53:42.714548  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1109 00:53:42.714612  112068 factory.go:648] Error scheduling volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 00:53:42.714650  112068 scheduler.go:774] Updating pod condition for volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E1109 00:53:42.714666  112068 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 00:53:42.715174  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound" with version 33925
I1109 00:53:42.715225  112068 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: bound to "pv-w-pvc-prebound"
I1109 00:53:42.715240  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound] status: set phase Bound
I1109 00:53:42.717160  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.706469ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:42.717378  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-prebound/status: (1.810957ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.717635  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound" with version 33926
I1109 00:53:42.717678  112068 pv_controller.go:740] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound" entered phase "Bound"
I1109 00:53:42.717696  112068 pv_controller.go:955] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound"
I1109 00:53:42.717722  112068 pv_controller.go:956] volume "pv-w-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound (uid: 9f569f18-1e7e-477d-9e11-3dc835b8fa4c)", boundByController: true
I1109 00:53:42.717742  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound" status after binding: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I1109 00:53:42.717779  112068 pv_controller_base.go:533] storeObjectUpdate: ignoring claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound" version 33925
I1109 00:53:42.717969  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound" with version 33926
I1109 00:53:42.717990  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I1109 00:53:42.718042  112068 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: volume "pv-w-pvc-prebound" found: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound (uid: 9f569f18-1e7e-477d-9e11-3dc835b8fa4c)", boundByController: true
I1109 00:53:42.718062  112068 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: claim is already correctly bound
I1109 00:53:42.718078  112068 pv_controller.go:929] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound"
I1109 00:53:42.718088  112068 pv_controller.go:827] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound"
I1109 00:53:42.718106  112068 pv_controller.go:839] updating PersistentVolume[pv-w-pvc-prebound]: already bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound"
I1109 00:53:42.718120  112068 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1109 00:53:42.718129  112068 pv_controller.go:778] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I1109 00:53:42.718139  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: binding to "pv-w-pvc-prebound"
I1109 00:53:42.718156  112068 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound]: already bound to "pv-w-pvc-prebound"
I1109 00:53:42.718169  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound] status: set phase Bound
I1109 00:53:42.718186  112068 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound] status: phase Bound already set
I1109 00:53:42.718202  112068 pv_controller.go:955] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound"
I1109 00:53:42.718238  112068 pv_controller.go:956] volume "pv-w-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound (uid: 9f569f18-1e7e-477d-9e11-3dc835b8fa4c)", boundByController: true
I1109 00:53:42.718258  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound" status after binding: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I1109 00:53:42.767948  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (2.164497ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.868804  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (3.067584ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.967440  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.648379ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.969428  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pvc-prebound: (1.430975ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.971323  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-prebound: (1.345603ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.973145  112068 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-pvc-prebound: (1.447488ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.978405  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pvc-prebound
I1109 00:53:42.978456  112068 scheduler.go:607] Skip schedule deleting pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pvc-prebound
I1109 00:53:42.981188  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (7.532621ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.981901  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (3.115679ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:42.985667  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (3.910255ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.986397  112068 pv_controller_base.go:265] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound" deleted
I1109 00:53:42.986442  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 32620
I1109 00:53:42.986476  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound (uid: 9f569f18-1e7e-477d-9e11-3dc835b8fa4c)", boundByController: true
I1109 00:53:42.986486  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound
I1109 00:53:42.987693  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-prebound: (1.004054ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:42.987919  112068 pv_controller.go:545] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound not found
I1109 00:53:42.987938  112068 pv_controller.go:573] volume "pv-w-pvc-prebound" is released and reclaim policy "Retain" will be executed
I1109 00:53:42.987951  112068 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Released
I1109 00:53:42.990581  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (2.368587ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:42.990769  112068 store.go:231] deletion of /cc08644e-00cc-4109-8d71-8e808dc3f283/persistentvolumes/pv-w-pvc-prebound failed because of a conflict, going to retry
I1109 00:53:42.991079  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 33974
I1109 00:53:42.991109  112068 pv_controller.go:796] volume "pv-w-pvc-prebound" entered phase "Released"
I1109 00:53:42.991119  112068 pv_controller.go:1009] reclaimVolume[pv-w-pvc-prebound]: policy is Retain, nothing to do
I1109 00:53:42.991143  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 33974
I1109 00:53:42.991166  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Released, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound (uid: 9f569f18-1e7e-477d-9e11-3dc835b8fa4c)", boundByController: true
I1109 00:53:42.991178  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound
I1109 00:53:42.991204  112068 pv_controller.go:545] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound not found
I1109 00:53:42.991224  112068 pv_controller.go:1009] reclaimVolume[pv-w-pvc-prebound]: policy is Retain, nothing to do
I1109 00:53:42.992348  112068 httplog.go:90] DELETE /api/v1/persistentvolumes: (6.04031ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:42.992711  112068 pv_controller_base.go:216] volume "pv-w-pvc-prebound" deleted
I1109 00:53:42.992752  112068 pv_controller_base.go:403] deletion of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-prebound" was already processed
I1109 00:53:42.999781  112068 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (7.058632ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:43.000462  112068 volume_binding_test.go:191] Running test wait can bind two
I1109 00:53:43.002532  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.793563ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:43.004914  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.744697ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:43.007685  112068 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-2", version 33981
I1109 00:53:43.007726  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Pending, bound to: "", boundByController: false
I1109 00:53:43.007748  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-2]: volume is unused
I1109 00:53:43.007757  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-2]: set phase Available
I1109 00:53:43.007394  112068 httplog.go:90] POST /api/v1/persistentvolumes: (1.896235ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:43.011548  112068 httplog.go:90] POST /api/v1/persistentvolumes: (2.927363ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:43.011553  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (3.543719ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:43.011781  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 33982
I1109 00:53:43.011808  112068 pv_controller.go:796] volume "pv-w-canbind-2" entered phase "Available"
I1109 00:53:43.012524  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 33982
I1109 00:53:43.012560  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Available, bound to: "", boundByController: false
I1109 00:53:43.012581  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-2]: volume is unused
I1109 00:53:43.012590  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-2]: set phase Available
I1109 00:53:43.012599  112068 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-2]: phase Available already set
I1109 00:53:43.012614  112068 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-3", version 33983
I1109 00:53:43.012627  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Pending, bound to: "", boundByController: false
I1109 00:53:43.012649  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-3]: volume is unused
I1109 00:53:43.012655  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-3]: set phase Available
I1109 00:53:43.014105  112068 httplog.go:90] POST /api/v1/persistentvolumes: (2.11663ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:43.015947  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (3.049052ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:43.016512  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 33985
I1109 00:53:43.016545  112068 pv_controller.go:796] volume "pv-w-canbind-3" entered phase "Available"
I1109 00:53:43.016571  112068 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-5", version 33984
I1109 00:53:43.016597  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Pending, bound to: "", boundByController: false
I1109 00:53:43.016616  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-5]: volume is unused
I1109 00:53:43.016622  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-5]: set phase Available
I1109 00:53:43.017273  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (2.489155ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:43.017579  112068 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2", version 33986
I1109 00:53:43.017600  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:43.017682  112068 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2]: no volume found
I1109 00:53:43.017702  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2] status: set phase Pending
I1109 00:53:43.017712  112068 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2] status: phase Pending already set
I1109 00:53:43.017751  112068 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2", Name:"pvc-w-canbind-2", UID:"d1f5fcfa-8328-46f4-b1f4-fce9418cb04a", APIVersion:"v1", ResourceVersion:"33986", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1109 00:53:43.019090  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-5/status: (2.156696ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:43.019451  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 33987
I1109 00:53:43.019475  112068 pv_controller.go:796] volume "pv-w-canbind-5" entered phase "Available"
I1109 00:53:43.019500  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 33985
I1109 00:53:43.019519  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Available, bound to: "", boundByController: false
I1109 00:53:43.019539  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-3]: volume is unused
I1109 00:53:43.019545  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-3]: set phase Available
I1109 00:53:43.019553  112068 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-3]: phase Available already set
I1109 00:53:43.019565  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 33987
I1109 00:53:43.019578  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Available, bound to: "", boundByController: false
I1109 00:53:43.019598  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-5]: volume is unused
I1109 00:53:43.019603  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-5]: set phase Available
I1109 00:53:43.019611  112068 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-5]: phase Available already set
I1109 00:53:43.020124  112068 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3", version 33988
I1109 00:53:43.020150  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:43.020186  112068 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3]: no volume found
I1109 00:53:43.020205  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3] status: set phase Pending
I1109 00:53:43.020236  112068 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3] status: phase Pending already set
I1109 00:53:43.020278  112068 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2", Name:"pvc-w-canbind-3", UID:"28ff4021-640f-4ceb-8946-2e6824977d61", APIVersion:"v1", ResourceVersion:"33988", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1109 00:53:43.020760  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (2.152645ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I1109 00:53:43.022467  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (4.804852ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:43.023483  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (1.769721ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I1109 00:53:43.025084  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (2.019449ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38954]
I1109 00:53:43.026193  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-canbind-2
I1109 00:53:43.026289  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-canbind-2
I1109 00:53:43.026689  112068 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-canbind-2", PVC "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3" on node "node-1"
I1109 00:53:43.026766  112068 scheduler_binder.go:725] storage class "wait-qlh6" of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3" does not support dynamic provisioning
I1109 00:53:43.026897  112068 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-canbind-2" on node "node-2"
I1109 00:53:43.027164  112068 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-canbind-2", node "node-2"
I1109 00:53:43.027245  112068 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-canbind-2", version 33982
I1109 00:53:43.027283  112068 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-canbind-3", version 33985
I1109 00:53:43.027410  112068 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-canbind-2", node "node-2"
I1109 00:53:43.027427  112068 scheduler_binder.go:404] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2" bound to volume "pv-w-canbind-2"
I1109 00:53:43.033167  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2: (5.374716ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I1109 00:53:43.033582  112068 scheduler_binder.go:410] updating PersistentVolume[pv-w-canbind-2]: bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2"
I1109 00:53:43.033624  112068 scheduler_binder.go:404] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3" bound to volume "pv-w-canbind-3"
I1109 00:53:43.033961  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 33992
I1109 00:53:43.034009  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2 (uid: d1f5fcfa-8328-46f4-b1f4-fce9418cb04a)", boundByController: true
I1109 00:53:43.034023  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2
I1109 00:53:43.034042  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:43.034056  112068 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I1109 00:53:43.034102  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2" with version 33986
I1109 00:53:43.034116  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:43.034152  112068 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2]: volume "pv-w-canbind-2" found: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2 (uid: d1f5fcfa-8328-46f4-b1f4-fce9418cb04a)", boundByController: true
I1109 00:53:43.034163  112068 pv_controller.go:929] binding volume "pv-w-canbind-2" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2"
I1109 00:53:43.034176  112068 pv_controller.go:827] updating PersistentVolume[pv-w-canbind-2]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2"
I1109 00:53:43.034191  112068 pv_controller.go:839] updating PersistentVolume[pv-w-canbind-2]: already bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2"
I1109 00:53:43.034205  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-2]: set phase Bound
I1109 00:53:43.037537  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (2.931298ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:43.037825  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 33994
I1109 00:53:43.037859  112068 pv_controller.go:796] volume "pv-w-canbind-2" entered phase "Bound"
I1109 00:53:43.037895  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2]: binding to "pv-w-canbind-2"
I1109 00:53:43.037907  112068 pv_controller.go:899] volume "pv-w-canbind-2" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2"
I1109 00:53:43.038223  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 33994
I1109 00:53:43.038356  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2 (uid: d1f5fcfa-8328-46f4-b1f4-fce9418cb04a)", boundByController: true
I1109 00:53:43.038515  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2
I1109 00:53:43.038654  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:43.038775  112068 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I1109 00:53:43.038559  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3: (4.219208ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I1109 00:53:43.039438  112068 scheduler_binder.go:410] updating PersistentVolume[pv-w-canbind-3]: bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3"
I1109 00:53:43.039371  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 33995
I1109 00:53:43.039500  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3 (uid: 28ff4021-640f-4ceb-8946-2e6824977d61)", boundByController: true
I1109 00:53:43.039513  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3
I1109 00:53:43.039529  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:43.039555  112068 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind-3]: volume not bound yet, waiting for syncClaim to fix it
I1109 00:53:43.041708  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-canbind-2: (3.519929ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:43.042179  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2" with version 33996
I1109 00:53:43.042270  112068 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2]: bound to "pv-w-canbind-2"
I1109 00:53:43.042283  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2] status: set phase Bound
I1109 00:53:43.044889  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-canbind-2/status: (2.226732ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:43.045116  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2" with version 33997
I1109 00:53:43.045141  112068 pv_controller.go:740] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2" entered phase "Bound"
I1109 00:53:43.045156  112068 pv_controller.go:955] volume "pv-w-canbind-2" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2"
I1109 00:53:43.045173  112068 pv_controller.go:956] volume "pv-w-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2 (uid: d1f5fcfa-8328-46f4-b1f4-fce9418cb04a)", boundByController: true
I1109 00:53:43.045184  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2" status after binding: phase: Bound, bound to: "pv-w-canbind-2", bindCompleted: true, boundByController: true
I1109 00:53:43.045241  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3" with version 33988
I1109 00:53:43.045252  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:43.045279  112068 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3]: volume "pv-w-canbind-3" found: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3 (uid: 28ff4021-640f-4ceb-8946-2e6824977d61)", boundByController: true
I1109 00:53:43.045286  112068 pv_controller.go:929] binding volume "pv-w-canbind-3" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3"
I1109 00:53:43.045294  112068 pv_controller.go:827] updating PersistentVolume[pv-w-canbind-3]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3"
I1109 00:53:43.045306  112068 pv_controller.go:839] updating PersistentVolume[pv-w-canbind-3]: already bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3"
I1109 00:53:43.045313  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-3]: set phase Bound
I1109 00:53:43.047978  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (2.333916ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:43.048477  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 33998
I1109 00:53:43.048528  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3 (uid: 28ff4021-640f-4ceb-8946-2e6824977d61)", boundByController: true
I1109 00:53:43.048542  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3
I1109 00:53:43.048584  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:43.048602  112068 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind-3]: volume not bound yet, waiting for syncClaim to fix it
I1109 00:53:43.048634  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 33998
I1109 00:53:43.048659  112068 pv_controller.go:796] volume "pv-w-canbind-3" entered phase "Bound"
I1109 00:53:43.048674  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3]: binding to "pv-w-canbind-3"
I1109 00:53:43.048693  112068 pv_controller.go:899] volume "pv-w-canbind-3" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3"
I1109 00:53:43.051786  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-canbind-3: (2.668063ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:43.052110  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3" with version 33999
I1109 00:53:43.052147  112068 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3]: bound to "pv-w-canbind-3"
I1109 00:53:43.052157  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3] status: set phase Bound
I1109 00:53:43.055672  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-canbind-3/status: (3.06809ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:43.056005  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3" with version 34000
I1109 00:53:43.056038  112068 pv_controller.go:740] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3" entered phase "Bound"
I1109 00:53:43.056059  112068 pv_controller.go:955] volume "pv-w-canbind-3" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3"
I1109 00:53:43.056105  112068 pv_controller.go:956] volume "pv-w-canbind-3" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3 (uid: 28ff4021-640f-4ceb-8946-2e6824977d61)", boundByController: true
I1109 00:53:43.056122  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3" status after binding: phase: Bound, bound to: "pv-w-canbind-3", bindCompleted: true, boundByController: true
I1109 00:53:43.056181  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2" with version 33997
I1109 00:53:43.056206  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2]: phase: Bound, bound to: "pv-w-canbind-2", bindCompleted: true, boundByController: true
I1109 00:53:43.056264  112068 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2]: volume "pv-w-canbind-2" found: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2 (uid: d1f5fcfa-8328-46f4-b1f4-fce9418cb04a)", boundByController: true
I1109 00:53:43.056274  112068 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2]: claim is already correctly bound
I1109 00:53:43.056301  112068 pv_controller.go:929] binding volume "pv-w-canbind-2" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2"
I1109 00:53:43.056313  112068 pv_controller.go:827] updating PersistentVolume[pv-w-canbind-2]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2"
I1109 00:53:43.056337  112068 pv_controller.go:839] updating PersistentVolume[pv-w-canbind-2]: already bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2"
I1109 00:53:43.056348  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-2]: set phase Bound
I1109 00:53:43.056357  112068 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-2]: phase Bound already set
I1109 00:53:43.056488  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2]: binding to "pv-w-canbind-2"
I1109 00:53:43.056510  112068 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2]: already bound to "pv-w-canbind-2"
I1109 00:53:43.056538  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2] status: set phase Bound
I1109 00:53:43.056571  112068 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2] status: phase Bound already set
I1109 00:53:43.056618  112068 pv_controller.go:955] volume "pv-w-canbind-2" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2"
I1109 00:53:43.056652  112068 pv_controller.go:956] volume "pv-w-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2 (uid: d1f5fcfa-8328-46f4-b1f4-fce9418cb04a)", boundByController: true
I1109 00:53:43.056672  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2" status after binding: phase: Bound, bound to: "pv-w-canbind-2", bindCompleted: true, boundByController: true
I1109 00:53:43.056721  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3" with version 34000
I1109 00:53:43.056743  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3]: phase: Bound, bound to: "pv-w-canbind-3", bindCompleted: true, boundByController: true
I1109 00:53:43.056775  112068 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3]: volume "pv-w-canbind-3" found: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3 (uid: 28ff4021-640f-4ceb-8946-2e6824977d61)", boundByController: true
I1109 00:53:43.056789  112068 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3]: claim is already correctly bound
I1109 00:53:43.056799  112068 pv_controller.go:929] binding volume "pv-w-canbind-3" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3"
I1109 00:53:43.056818  112068 pv_controller.go:827] updating PersistentVolume[pv-w-canbind-3]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3"
I1109 00:53:43.056861  112068 pv_controller.go:839] updating PersistentVolume[pv-w-canbind-3]: already bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3"
I1109 00:53:43.056878  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-3]: set phase Bound
I1109 00:53:43.056887  112068 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-3]: phase Bound already set
I1109 00:53:43.056896  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3]: binding to "pv-w-canbind-3"
I1109 00:53:43.056948  112068 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3]: already bound to "pv-w-canbind-3"
I1109 00:53:43.056964  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3] status: set phase Bound
I1109 00:53:43.057017  112068 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3] status: phase Bound already set
I1109 00:53:43.057035  112068 pv_controller.go:955] volume "pv-w-canbind-3" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3"
I1109 00:53:43.057059  112068 pv_controller.go:956] volume "pv-w-canbind-3" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3 (uid: 28ff4021-640f-4ceb-8946-2e6824977d61)", boundByController: true
I1109 00:53:43.057073  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3" status after binding: phase: Bound, bound to: "pv-w-canbind-3", bindCompleted: true, boundByController: true
I1109 00:53:43.128655  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind-2: (1.934322ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:43.227965  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind-2: (1.902931ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:43.328492  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind-2: (2.41243ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:43.427961  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind-2: (1.927647ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:43.517104  112068 cache.go:656] Couldn't expire cache for pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-canbind-2. Binding is still in progress.
I1109 00:53:43.527867  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind-2: (1.781601ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:43.627823  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind-2: (1.879117ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:43.728320  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind-2: (2.282438ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:43.827796  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind-2: (1.790621ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:43.927677  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind-2: (1.655871ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.027924  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind-2: (1.927749ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.039738  112068 scheduler_binder.go:553] All PVCs for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-canbind-2" are bound
I1109 00:53:44.039846  112068 factory.go:698] Attempting to bind pod-w-canbind-2 to node-2
I1109 00:53:44.044019  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind-2/binding: (3.650457ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.044473  112068 scheduler.go:756] pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-canbind-2 is bound successfully on node "node-2", 2 nodes evaluated, 1 nodes were found feasible.
I1109 00:53:44.047410  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (2.530143ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.127828  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind-2: (1.797757ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.129774  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-canbind-2: (1.336331ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.131508  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-canbind-3: (1.28777ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.133315  112068 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-2: (1.384638ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.135385  112068 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-3: (1.469894ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.136889  112068 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-5: (1.092275ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.143954  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (6.625388ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.151835  112068 pv_controller_base.go:265] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2" deleted
I1109 00:53:44.151905  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 33994
I1109 00:53:44.151957  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2 (uid: d1f5fcfa-8328-46f4-b1f4-fce9418cb04a)", boundByController: true
I1109 00:53:44.151969  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2
I1109 00:53:44.153649  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-canbind-2: (1.350895ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I1109 00:53:44.153942  112068 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2 not found
I1109 00:53:44.153972  112068 pv_controller.go:573] volume "pv-w-canbind-2" is released and reclaim policy "Retain" will be executed
I1109 00:53:44.154093  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-2]: set phase Released
I1109 00:53:44.157338  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (2.807997ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I1109 00:53:44.157718  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 34039
I1109 00:53:44.157808  112068 pv_controller.go:796] volume "pv-w-canbind-2" entered phase "Released"
I1109 00:53:44.157845  112068 pv_controller.go:1009] reclaimVolume[pv-w-canbind-2]: policy is Retain, nothing to do
I1109 00:53:44.157873  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 34039
I1109 00:53:44.157898  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Released, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2 (uid: d1f5fcfa-8328-46f4-b1f4-fce9418cb04a)", boundByController: true
I1109 00:53:44.157908  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2
I1109 00:53:44.157928  112068 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2 not found
I1109 00:53:44.157935  112068 pv_controller.go:1009] reclaimVolume[pv-w-canbind-2]: policy is Retain, nothing to do
I1109 00:53:44.158131  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (13.66627ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.158320  112068 pv_controller_base.go:265] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3" deleted
I1109 00:53:44.158356  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 33998
I1109 00:53:44.158375  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3 (uid: 28ff4021-640f-4ceb-8946-2e6824977d61)", boundByController: true
I1109 00:53:44.158442  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3
I1109 00:53:44.159536  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-canbind-3: (874.391µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.159744  112068 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3 not found
I1109 00:53:44.159788  112068 pv_controller.go:573] volume "pv-w-canbind-3" is released and reclaim policy "Retain" will be executed
I1109 00:53:44.159803  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-3]: set phase Released
I1109 00:53:44.162293  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (2.123127ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.162486  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 34042
I1109 00:53:44.162517  112068 pv_controller.go:796] volume "pv-w-canbind-3" entered phase "Released"
I1109 00:53:44.162527  112068 pv_controller.go:1009] reclaimVolume[pv-w-canbind-3]: policy is Retain, nothing to do
I1109 00:53:44.162697  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 34042
I1109 00:53:44.162724  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Released, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3 (uid: 28ff4021-640f-4ceb-8946-2e6824977d61)", boundByController: true
I1109 00:53:44.162734  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3
I1109 00:53:44.162747  112068 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3 not found
I1109 00:53:44.162752  112068 pv_controller.go:1009] reclaimVolume[pv-w-canbind-3]: policy is Retain, nothing to do
I1109 00:53:44.164349  112068 pv_controller_base.go:216] volume "pv-w-canbind-2" deleted
I1109 00:53:44.164381  112068 pv_controller_base.go:403] deletion of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-2" was already processed
I1109 00:53:44.167973  112068 pv_controller_base.go:216] volume "pv-w-canbind-3" deleted
I1109 00:53:44.168008  112068 pv_controller_base.go:403] deletion of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-3" was already processed
I1109 00:53:44.173939  112068 httplog.go:90] DELETE /api/v1/persistentvolumes: (15.124236ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I1109 00:53:44.174375  112068 pv_controller_base.go:216] volume "pv-w-canbind-5" deleted
I1109 00:53:44.181347  112068 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.613453ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I1109 00:53:44.181492  112068 volume_binding_test.go:191] Running test wait cannot bind two
I1109 00:53:44.183110  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.452373ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I1109 00:53:44.186037  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.213802ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I1109 00:53:44.188573  112068 httplog.go:90] POST /api/v1/persistentvolumes: (2.071689ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I1109 00:53:44.188583  112068 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-cannotbind-1", version 34050
I1109 00:53:44.188970  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-cannotbind-1]: phase: Pending, bound to: "", boundByController: false
I1109 00:53:44.188991  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-w-cannotbind-1]: volume is unused
I1109 00:53:44.189000  112068 pv_controller.go:775] updating PersistentVolume[pv-w-cannotbind-1]: set phase Available
I1109 00:53:44.191154  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-cannotbind-1/status: (1.934866ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.191557  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-1" with version 34051
I1109 00:53:44.191592  112068 pv_controller.go:796] volume "pv-w-cannotbind-1" entered phase "Available"
I1109 00:53:44.191621  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-1" with version 34051
I1109 00:53:44.191635  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-cannotbind-1]: phase: Available, bound to: "", boundByController: false
I1109 00:53:44.191655  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-w-cannotbind-1]: volume is unused
I1109 00:53:44.191662  112068 pv_controller.go:775] updating PersistentVolume[pv-w-cannotbind-1]: set phase Available
I1109 00:53:44.191673  112068 pv_controller.go:778] updating PersistentVolume[pv-w-cannotbind-1]: phase Available already set
I1109 00:53:44.193191  112068 httplog.go:90] POST /api/v1/persistentvolumes: (3.981502ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I1109 00:53:44.193416  112068 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-cannotbind-2", version 34052
I1109 00:53:44.193480  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-cannotbind-2]: phase: Pending, bound to: "", boundByController: false
I1109 00:53:44.193503  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume is unused
I1109 00:53:44.193510  112068 pv_controller.go:775] updating PersistentVolume[pv-w-cannotbind-2]: set phase Available
I1109 00:53:44.195772  112068 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-1", version 34054
I1109 00:53:44.195803  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-1]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:44.195880  112068 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-1]: no volume found
I1109 00:53:44.195951  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-1] status: set phase Pending
I1109 00:53:44.195978  112068 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-1] status: phase Pending already set
I1109 00:53:44.196000  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (2.436914ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I1109 00:53:44.196371  112068 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2", Name:"pvc-w-cannotbind-1", UID:"074af984-6a7d-462b-8377-216a97da090a", APIVersion:"v1", ResourceVersion:"34054", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1109 00:53:44.197677  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-cannotbind-2/status: (3.798042ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.198444  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (1.985729ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I1109 00:53:44.198495  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 34055
I1109 00:53:44.198721  112068 pv_controller.go:796] volume "pv-w-cannotbind-2" entered phase "Available"
I1109 00:53:44.198765  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 34055
I1109 00:53:44.198800  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-cannotbind-2]: phase: Available, bound to: "", boundByController: false
I1109 00:53:44.198822  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume is unused
I1109 00:53:44.198828  112068 pv_controller.go:775] updating PersistentVolume[pv-w-cannotbind-2]: set phase Available
I1109 00:53:44.198837  112068 pv_controller.go:778] updating PersistentVolume[pv-w-cannotbind-2]: phase Available already set
I1109 00:53:44.202707  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (5.866553ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43598]
I1109 00:53:44.203530  112068 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-2", version 34057
I1109 00:53:44.203562  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:44.203592  112068 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-2]: no volume found
I1109 00:53:44.203613  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-2] status: set phase Pending
I1109 00:53:44.203658  112068 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-2] status: phase Pending already set
I1109 00:53:44.203890  112068 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2", Name:"pvc-w-cannotbind-2", UID:"2d7ef4b7-89d7-4a6f-8236-1d4424529fcd", APIVersion:"v1", ResourceVersion:"34057", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1109 00:53:44.206617  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (2.597496ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I1109 00:53:44.206887  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (2.506495ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.207552  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind-2
I1109 00:53:44.207578  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind-2
I1109 00:53:44.207817  112068 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind-2", PVC "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-2" on node "node-1"
I1109 00:53:44.207841  112068 scheduler_binder.go:725] storage class "wait-8jxk" of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-2" does not support dynamic provisioning
I1109 00:53:44.207930  112068 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind-2", PVC "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-2" on node "node-2"
I1109 00:53:44.207949  112068 scheduler_binder.go:725] storage class "wait-8jxk" of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-2" does not support dynamic provisioning
I1109 00:53:44.208009  112068 factory.go:632] Unable to schedule volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind-2: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I1109 00:53:44.208050  112068 scheduler.go:774] Updating pod condition for volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind-2 to (PodScheduled==False, Reason=Unschedulable)
I1109 00:53:44.211071  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (2.158975ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.214309  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-cannotbind-2: (5.476285ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I1109 00:53:44.214346  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-cannotbind-2/status: (6.006159ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
E1109 00:53:44.214669  112068 factory.go:673] pod is already present in the activeQ
I1109 00:53:44.216150  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-cannotbind-2: (1.380363ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.216416  112068 generic_scheduler.go:341] Preemption will not help schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind-2 on any node.
I1109 00:53:44.216509  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind-2
I1109 00:53:44.216536  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind-2
I1109 00:53:44.216795  112068 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind-2", PVC "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-2" on node "node-2"
I1109 00:53:44.216800  112068 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind-2", PVC "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-2" on node "node-1"
I1109 00:53:44.216813  112068 scheduler_binder.go:725] storage class "wait-8jxk" of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-2" does not support dynamic provisioning
I1109 00:53:44.216829  112068 scheduler_binder.go:725] storage class "wait-8jxk" of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-2" does not support dynamic provisioning
I1109 00:53:44.216880  112068 factory.go:632] Unable to schedule volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind-2: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I1109 00:53:44.216936  112068 scheduler.go:774] Updating pod condition for volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind-2 to (PodScheduled==False, Reason=Unschedulable)
I1109 00:53:44.219052  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-cannotbind-2: (1.806918ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.219103  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-cannotbind-2: (1.908037ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.219363  112068 generic_scheduler.go:341] Preemption will not help schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind-2 on any node.
I1109 00:53:44.219937  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (2.269491ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43604]
I1109 00:53:44.310200  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-cannotbind-2: (2.464915ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.312883  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-cannotbind-1: (1.750239ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.316671  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-cannotbind-2: (3.181889ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.319198  112068 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-cannotbind-1: (1.476145ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.321231  112068 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-cannotbind-2: (1.491479ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.326545  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind-2
I1109 00:53:44.326588  112068 scheduler.go:607] Skip schedule deleting pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-cannotbind-2
I1109 00:53:44.327889  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (6.09873ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.330301  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (3.307093ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.334155  112068 pv_controller_base.go:265] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-1" deleted
I1109 00:53:44.336472  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (7.338386ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.336838  112068 pv_controller_base.go:265] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-cannotbind-2" deleted
I1109 00:53:44.342424  112068 pv_controller_base.go:216] volume "pv-w-cannotbind-1" deleted
I1109 00:53:44.344595  112068 httplog.go:90] DELETE /api/v1/persistentvolumes: (7.613151ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.345283  112068 pv_controller_base.go:216] volume "pv-w-cannotbind-2" deleted
I1109 00:53:44.352114  112068 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.721663ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.352403  112068 volume_binding_test.go:191] Running test immediate can bind
I1109 00:53:44.354293  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.638545ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.357776  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.901209ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.360356  112068 httplog.go:90] POST /api/v1/persistentvolumes: (2.006387ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.360581  112068 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-canbind", version 34078
I1109 00:53:44.360616  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Pending, bound to: "", boundByController: false
I1109 00:53:44.360638  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-i-canbind]: volume is unused
I1109 00:53:44.360647  112068 pv_controller.go:775] updating PersistentVolume[pv-i-canbind]: set phase Available
I1109 00:53:44.363186  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (2.25417ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.363558  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 34079
I1109 00:53:44.363592  112068 pv_controller.go:796] volume "pv-i-canbind" entered phase "Available"
I1109 00:53:44.363715  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 34079
I1109 00:53:44.363753  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Available, bound to: "", boundByController: false
I1109 00:53:44.363895  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-i-canbind]: volume is unused
I1109 00:53:44.363917  112068 pv_controller.go:775] updating PersistentVolume[pv-i-canbind]: set phase Available
I1109 00:53:44.363934  112068 pv_controller.go:778] updating PersistentVolume[pv-i-canbind]: phase Available already set
I1109 00:53:44.363986  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (3.189231ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.364881  112068 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind", version 34080
I1109 00:53:44.364992  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:44.365071  112068 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind]: volume "pv-i-canbind" found: phase: Available, bound to: "", boundByController: false
I1109 00:53:44.365130  112068 pv_controller.go:929] binding volume "pv-i-canbind" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind"
I1109 00:53:44.365174  112068 pv_controller.go:827] updating PersistentVolume[pv-i-canbind]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind"
I1109 00:53:44.365320  112068 pv_controller.go:847] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind" bound to volume "pv-i-canbind"
I1109 00:53:44.367286  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (2.693381ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.367827  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-canbind
I1109 00:53:44.367924  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-canbind
E1109 00:53:44.368160  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims
I1109 00:53:44.368206  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind: (2.555105ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
E1109 00:53:44.368436  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims
E1109 00:53:44.368527  112068 factory.go:648] Error scheduling volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-canbind: error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 00:53:44.368593  112068 scheduler.go:774] Updating pod condition for volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-canbind to (PodScheduled==False, Reason=Unschedulable)
I1109 00:53:44.368614  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 34082
I1109 00:53:44.368683  112068 pv_controller.go:860] updating PersistentVolume[pv-i-canbind]: bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind"
I1109 00:53:44.368711  112068 pv_controller.go:775] updating PersistentVolume[pv-i-canbind]: set phase Bound
I1109 00:53:44.369050  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 34082
I1109 00:53:44.369110  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind (uid: e4cbb790-adcd-4b33-ba00-385199b89eef)", boundByController: true
I1109 00:53:44.369171  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind
I1109 00:53:44.369244  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:44.369412  112068 pv_controller.go:601] synchronizing PersistentVolume[pv-i-canbind]: volume not bound yet, waiting for syncClaim to fix it
I1109 00:53:44.370968  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-canbind: (1.96131ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.371941  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (2.314261ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43626]
I1109 00:53:44.373550  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (4.536257ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38952]
I1109 00:53:44.373651  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-canbind/status: (3.947925ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43628]
I1109 00:53:44.373971  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 34084
I1109 00:53:44.374009  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind (uid: e4cbb790-adcd-4b33-ba00-385199b89eef)", boundByController: true
I1109 00:53:44.374022  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind
I1109 00:53:44.374039  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:44.374053  112068 pv_controller.go:601] synchronizing PersistentVolume[pv-i-canbind]: volume not bound yet, waiting for syncClaim to fix it
E1109 00:53:44.374402  112068 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims
I1109 00:53:44.374503  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-canbind
I1109 00:53:44.374538  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-canbind
I1109 00:53:44.374513  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 34084
I1109 00:53:44.374704  112068 pv_controller.go:796] volume "pv-i-canbind" entered phase "Bound"
I1109 00:53:44.374725  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind]: binding to "pv-i-canbind"
I1109 00:53:44.374884  112068 pv_controller.go:899] volume "pv-i-canbind" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind"
E1109 00:53:44.374962  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims
E1109 00:53:44.375102  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims
E1109 00:53:44.375199  112068 factory.go:648] Error scheduling volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-canbind: error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 00:53:44.375318  112068 scheduler.go:774] Updating pod condition for volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-canbind to (PodScheduled==False, Reason=Unschedulable)
E1109 00:53:44.375588  112068 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims
I1109 00:53:44.382785  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-i-canbind: (6.960663ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43626]
I1109 00:53:44.383114  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind" with version 34086
I1109 00:53:44.383147  112068 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind]: bound to "pv-i-canbind"
I1109 00:53:44.383160  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind] status: set phase Bound
I1109 00:53:44.384051  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (7.074432ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:44.384695  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-canbind: (8.993138ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.386791  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-i-canbind/status: (3.344395ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43626]
I1109 00:53:44.387094  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind" with version 34088
I1109 00:53:44.387130  112068 pv_controller.go:740] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind" entered phase "Bound"
I1109 00:53:44.387148  112068 pv_controller.go:955] volume "pv-i-canbind" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind"
I1109 00:53:44.387179  112068 pv_controller.go:956] volume "pv-i-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind (uid: e4cbb790-adcd-4b33-ba00-385199b89eef)", boundByController: true
I1109 00:53:44.387224  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind" status after binding: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I1109 00:53:44.387260  112068 pv_controller_base.go:533] storeObjectUpdate: ignoring claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind" version 34086
I1109 00:53:44.387546  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind" with version 34088
I1109 00:53:44.387577  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind]: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I1109 00:53:44.387597  112068 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind]: volume "pv-i-canbind" found: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind (uid: e4cbb790-adcd-4b33-ba00-385199b89eef)", boundByController: true
I1109 00:53:44.387606  112068 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind]: claim is already correctly bound
I1109 00:53:44.387618  112068 pv_controller.go:929] binding volume "pv-i-canbind" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind"
I1109 00:53:44.387628  112068 pv_controller.go:827] updating PersistentVolume[pv-i-canbind]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind"
I1109 00:53:44.387646  112068 pv_controller.go:839] updating PersistentVolume[pv-i-canbind]: already bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind"
I1109 00:53:44.387723  112068 pv_controller.go:775] updating PersistentVolume[pv-i-canbind]: set phase Bound
I1109 00:53:44.387764  112068 pv_controller.go:778] updating PersistentVolume[pv-i-canbind]: phase Bound already set
I1109 00:53:44.387804  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind]: binding to "pv-i-canbind"
I1109 00:53:44.387890  112068 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind]: already bound to "pv-i-canbind"
I1109 00:53:44.387948  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind] status: set phase Bound
I1109 00:53:44.387994  112068 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind] status: phase Bound already set
I1109 00:53:44.388036  112068 pv_controller.go:955] volume "pv-i-canbind" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind"
I1109 00:53:44.388174  112068 pv_controller.go:956] volume "pv-i-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind (uid: e4cbb790-adcd-4b33-ba00-385199b89eef)", boundByController: true
I1109 00:53:44.388296  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind" status after binding: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I1109 00:53:44.470410  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-canbind: (1.971378ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.570158  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-canbind: (1.701458ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.670142  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-canbind: (1.595088ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.769935  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-canbind: (1.367838ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.870074  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-canbind: (1.644367ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:44.970120  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-canbind: (1.679734ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.070598  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-canbind: (2.198647ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.171835  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-canbind: (3.322427ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.270423  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-canbind: (1.941019ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.370579  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-canbind: (2.0678ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.470460  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-canbind: (1.90975ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.515030  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-canbind
I1109 00:53:45.515067  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-canbind
I1109 00:53:45.515435  112068 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-canbind" match with Node "node-1"
I1109 00:53:45.515678  112068 scheduler_binder.go:653] PersistentVolume "pv-i-canbind", Node "node-2" mismatch for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-canbind": No matching NodeSelectorTerms
I1109 00:53:45.515820  112068 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-canbind", node "node-1"
I1109 00:53:45.515843  112068 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-canbind", node "node-1": all PVCs bound and nothing to do
I1109 00:53:45.515951  112068 factory.go:698] Attempting to bind pod-i-canbind to node-1
I1109 00:53:45.517357  112068 cache.go:656] Couldn't expire cache for pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-canbind. Binding is still in progress.
I1109 00:53:45.519110  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-canbind/binding: (2.679896ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.519381  112068 scheduler.go:756] pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-canbind is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible.
I1109 00:53:45.521750  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (2.038024ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.570562  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-canbind: (2.051717ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.572898  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-i-canbind: (1.485037ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.575362  112068 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-canbind: (1.989337ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.582854  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (6.885753ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.588081  112068 pv_controller_base.go:265] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind" deleted
I1109 00:53:45.588132  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 34084
I1109 00:53:45.588143  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (4.468764ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.588169  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind (uid: e4cbb790-adcd-4b33-ba00-385199b89eef)", boundByController: true
I1109 00:53:45.588180  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind
I1109 00:53:45.589569  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-i-canbind: (1.113529ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:45.589886  112068 pv_controller.go:545] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind not found
I1109 00:53:45.590174  112068 pv_controller.go:573] volume "pv-i-canbind" is released and reclaim policy "Retain" will be executed
I1109 00:53:45.590261  112068 pv_controller.go:775] updating PersistentVolume[pv-i-canbind]: set phase Released
I1109 00:53:45.592570  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (1.9947ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:45.592950  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 34127
I1109 00:53:45.592991  112068 pv_controller.go:796] volume "pv-i-canbind" entered phase "Released"
I1109 00:53:45.593004  112068 pv_controller.go:1009] reclaimVolume[pv-i-canbind]: policy is Retain, nothing to do
I1109 00:53:45.593030  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 34127
I1109 00:53:45.593064  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Released, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind (uid: e4cbb790-adcd-4b33-ba00-385199b89eef)", boundByController: true
I1109 00:53:45.593083  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind
I1109 00:53:45.593103  112068 pv_controller.go:545] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind not found
I1109 00:53:45.593115  112068 pv_controller.go:1009] reclaimVolume[pv-i-canbind]: policy is Retain, nothing to do
I1109 00:53:45.593185  112068 store.go:231] deletion of /cc08644e-00cc-4109-8d71-8e808dc3f283/persistentvolumes/pv-i-canbind failed because of a conflict, going to retry
I1109 00:53:45.595728  112068 httplog.go:90] DELETE /api/v1/persistentvolumes: (6.992868ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.596311  112068 pv_controller_base.go:216] volume "pv-i-canbind" deleted
I1109 00:53:45.596348  112068 pv_controller_base.go:403] deletion of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind" was already processed
I1109 00:53:45.606117  112068 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (9.627267ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.606399  112068 volume_binding_test.go:191] Running test immediate cannot bind
I1109 00:53:45.608917  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.085977ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.611317  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.832857ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.614016  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (2.173866ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.614880  112068 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-cannotbind", version 34133
I1109 00:53:45.614942  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-cannotbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:45.614967  112068 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-cannotbind]: no volume found
I1109 00:53:45.614977  112068 pv_controller.go:1324] provisionClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-cannotbind]: started
E1109 00:53:45.615025  112068 pv_controller.go:1329] error finding provisioning plugin for claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-cannotbind: no volume plugin matched
I1109 00:53:45.615289  112068 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2", Name:"pvc-i-cannotbind", UID:"b2a2b466-88ab-4555-8757-5f3f5fbb2508", APIVersion:"v1", ResourceVersion:"34133", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' no volume plugin matched
I1109 00:53:45.617424  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (2.776275ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I1109 00:53:45.617809  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-cannotbind
I1109 00:53:45.617823  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-cannotbind
E1109 00:53:45.618015  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims
E1109 00:53:45.618083  112068 factory.go:648] Error scheduling volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-cannotbind: error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 00:53:45.618114  112068 scheduler.go:774] Updating pod condition for volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I1109 00:53:45.618320  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (2.921424ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:45.620549  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (1.930424ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:45.622545  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-cannotbind: (3.070239ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:45.622856  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-cannotbind/status: (4.427857ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
E1109 00:53:45.623049  112068 factory.go:673] pod is already present in the activeQ
E1109 00:53:45.623181  112068 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims
I1109 00:53:45.623278  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-cannotbind
I1109 00:53:45.623300  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-cannotbind
E1109 00:53:45.623559  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims
E1109 00:53:45.623561  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims
E1109 00:53:45.623656  112068 factory.go:648] Error scheduling volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-cannotbind: error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 00:53:45.623703  112068 scheduler.go:774] Updating pod condition for volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-cannotbind to (PodScheduled==False, Reason=Unschedulable)
E1109 00:53:45.623713  112068 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims
I1109 00:53:45.626006  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (1.948637ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:45.626515  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-cannotbind: (2.3612ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:45.722271  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-cannotbind: (3.538208ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:45.726167  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-i-cannotbind: (3.170057ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:45.732055  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-cannotbind
I1109 00:53:45.732116  112068 scheduler.go:607] Skip schedule deleting pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-cannotbind
I1109 00:53:45.734172  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (7.457433ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:45.734905  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (2.348775ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:45.738878  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (4.080836ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:45.739782  112068 pv_controller_base.go:265] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-cannotbind" deleted
I1109 00:53:45.740920  112068 httplog.go:90] DELETE /api/v1/persistentvolumes: (1.476745ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:45.748181  112068 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.636889ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:45.748511  112068 volume_binding_test.go:191] Running test immediate pv prebound
I1109 00:53:45.750691  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.896643ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:45.752627  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.533891ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:45.755170  112068 httplog.go:90] POST /api/v1/persistentvolumes: (2.064961ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:45.755701  112068 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-prebound", version 34163
I1109 00:53:45.755748  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Pending, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound (uid: )", boundByController: false
I1109 00:53:45.755755  112068 pv_controller.go:504] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound
I1109 00:53:45.755763  112068 pv_controller.go:775] updating PersistentVolume[pv-i-prebound]: set phase Available
I1109 00:53:45.757927  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (2.173992ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:45.758560  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (2.452452ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:45.758687  112068 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound", version 34164
I1109 00:53:45.758767  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:45.758852  112068 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Pending, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound (uid: )", boundByController: false
I1109 00:53:45.758871  112068 pv_controller.go:929] binding volume "pv-i-prebound" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound"
I1109 00:53:45.758884  112068 pv_controller.go:827] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound"
I1109 00:53:45.758923  112068 pv_controller.go:847] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I1109 00:53:45.759152  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 34165
I1109 00:53:45.759183  112068 pv_controller.go:796] volume "pv-i-prebound" entered phase "Available"
I1109 00:53:45.759224  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 34165
I1109 00:53:45.759256  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound (uid: )", boundByController: false
I1109 00:53:45.759266  112068 pv_controller.go:504] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound
I1109 00:53:45.759272  112068 pv_controller.go:775] updating PersistentVolume[pv-i-prebound]: set phase Available
I1109 00:53:45.759281  112068 pv_controller.go:778] updating PersistentVolume[pv-i-prebound]: phase Available already set
I1109 00:53:45.760482  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (1.982595ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:45.760880  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pv-prebound
I1109 00:53:45.760902  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pv-prebound
I1109 00:53:45.760917  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (1.76576ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
E1109 00:53:45.761077  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-pv-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 00:53:45.761084  112068 pv_controller.go:850] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
E1109 00:53:45.761077  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-pv-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 00:53:45.761103  112068 pv_controller.go:932] error binding volume "pv-i-prebound" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 00:53:45.761116  112068 pv_controller_base.go:251] could not sync claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
E1109 00:53:45.761151  112068 factory.go:648] Error scheduling volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pv-prebound: error while running "VolumeBinding" filter plugin for pod "pod-i-pv-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 00:53:45.761180  112068 scheduler.go:774] Updating pod condition for volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pv-prebound to (PodScheduled==False, Reason=Unschedulable)
I1109 00:53:45.762446  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.030144ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:45.763806  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound/status: (2.421455ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
E1109 00:53:45.764145  112068 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-pv-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 00:53:45.765951  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (4.131087ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43850]
I1109 00:53:45.863039  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.835122ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:45.963278  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.001171ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:46.063342  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.90725ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:46.163199  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.001714ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:46.263040  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.752576ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:46.363030  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.813604ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:46.463053  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.865827ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:46.563234  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.946185ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:46.663314  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.090839ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:46.763079  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.850959ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:46.863165  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.982021ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:46.962901  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.73542ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:47.063619  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.422656ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:47.163054  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.818914ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:47.263178  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.995476ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:47.363289  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.060369ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:47.462936  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.742843ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:47.563026  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.869603ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:47.663825  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.400293ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:47.763183  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.021235ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:47.862885  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.520636ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:47.963353  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.247773ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:48.064006  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.81542ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:48.163035  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.860342ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:48.262853  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.729882ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:48.363130  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.902311ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:48.463138  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.931182ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:48.563568  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.294101ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:48.663078  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.89982ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:48.762884  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.657547ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:48.863031  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.813863ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:48.963546  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.352099ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:49.062788  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.635687ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:49.163082  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.796159ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:49.263354  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.093934ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:49.362925  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.718916ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:49.463494  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.204677ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:49.563946  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.724809ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:49.662879  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.60796ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:49.763155  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.736122ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:49.862866  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.683844ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:49.962746  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.589055ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:50.063177  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.856054ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:50.162708  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.550884ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:50.266958  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (5.611531ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:50.365731  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.978007ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:50.463871  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.65644ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:50.562714  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.561461ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:50.672031  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (3.460665ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:50.762847  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.676635ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:50.862858  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.545439ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:50.962746  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.664125ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:51.062994  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.856379ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:51.163448  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.263516ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:51.262930  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.725143ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:51.363118  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.945713ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:51.462992  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.842299ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:51.563759  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.508393ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:51.662956  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.791387ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:51.763101  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.914152ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:51.863278  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.088373ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:51.963082  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.903427ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:52.063838  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.641558ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:52.175072  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (13.819613ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:52.264287  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.443589ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:52.363607  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.430985ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:52.463055  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.804076ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:52.542279  112068 httplog.go:90] GET /api/v1/namespaces/default: (1.753419ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:52.544543  112068 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.831819ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:52.546356  112068 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.388804ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:52.563183  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.973487ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:52.663294  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.954889ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:52.763022  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.865395ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:52.863276  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.060707ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:52.963140  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.007792ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:53.063312  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.10531ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:53.164125  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.11812ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:53.263531  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.162325ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:53.363089  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.936063ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:53.463455  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.266418ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:53.563001  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.816257ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:53.664557  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (3.360403ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:53.767130  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (5.758679ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:53.863028  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.853299ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:53.963808  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.59582ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:54.062642  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.452189ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:54.165662  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (3.426718ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:54.263680  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.986952ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:54.363325  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.511766ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:54.465070  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (3.895218ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:54.564176  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (3.064025ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:54.667745  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.643255ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:54.764027  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.428759ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:54.863642  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.412846ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:54.963884  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.475798ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:55.063076  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.882691ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:55.163085  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.882162ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:55.270549  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (9.384926ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:55.363486  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.074129ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:55.462735  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.560005ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:55.567407  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (5.955572ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:55.662732  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.513838ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:55.763078  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.910838ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:55.864161  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.98355ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:55.962955  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.745586ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:56.063746  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.570414ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:56.163515  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.352927ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:56.266420  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (5.20299ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:56.363434  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.217056ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:56.463528  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.345422ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:56.563678  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.951926ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:56.663844  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.622455ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:56.763068  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.912515ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:56.864018  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.884751ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:56.963044  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.850757ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:57.062927  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.74895ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:57.162897  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.726352ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:57.264296  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (3.137926ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:57.363189  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.992801ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:57.462883  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.756562ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:57.562608  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.500738ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:57.663027  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.825635ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:57.710433  112068 pv_controller_base.go:426] resyncing PV controller
I1109 00:53:57.710541  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 34165
I1109 00:53:57.710588  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound (uid: )", boundByController: false
I1109 00:53:57.710595  112068 pv_controller.go:504] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound
I1109 00:53:57.710603  112068 pv_controller.go:775] updating PersistentVolume[pv-i-prebound]: set phase Available
I1109 00:53:57.710612  112068 pv_controller.go:778] updating PersistentVolume[pv-i-prebound]: phase Available already set
I1109 00:53:57.710637  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound" with version 34164
I1109 00:53:57.710652  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:57.710688  112068 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound (uid: )", boundByController: false
I1109 00:53:57.710702  112068 pv_controller.go:929] binding volume "pv-i-prebound" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound"
I1109 00:53:57.710718  112068 pv_controller.go:827] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound"
I1109 00:53:57.710768  112068 pv_controller.go:847] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I1109 00:53:57.714659  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (3.442354ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:57.715394  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35747
I1109 00:53:57.715444  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound (uid: 9f2e4701-ccb3-4cf5-93bd-9d7cc8c3040d)", boundByController: false
I1109 00:53:57.715458  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound
I1109 00:53:57.715476  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:57.715492  112068 pv_controller.go:604] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1109 00:53:57.715666  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35747
I1109 00:53:57.715682  112068 pv_controller.go:860] updating PersistentVolume[pv-i-prebound]: bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound"
I1109 00:53:57.715693  112068 pv_controller.go:775] updating PersistentVolume[pv-i-prebound]: set phase Bound
I1109 00:53:57.715997  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pv-prebound
I1109 00:53:57.716009  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pv-prebound
E1109 00:53:57.716143  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-pv-prebound": pod has unbound immediate PersistentVolumeClaims
E1109 00:53:57.716183  112068 factory.go:648] Error scheduling volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pv-prebound: error while running "VolumeBinding" filter plugin for pod "pod-i-pv-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 00:53:57.716205  112068 scheduler.go:774] Updating pod condition for volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pv-prebound to (PodScheduled==False, Reason=Unschedulable)
E1109 00:53:57.716240  112068 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-pv-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 00:53:57.719085  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.142809ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:53:57.722021  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35749
I1109 00:53:57.722072  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound (uid: 9f2e4701-ccb3-4cf5-93bd-9d7cc8c3040d)", boundByController: false
I1109 00:53:57.722083  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound
I1109 00:53:57.722103  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:57.722120  112068 pv_controller.go:604] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1109 00:53:57.722314  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (5.620537ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:57.722495  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (5.804107ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:57.722580  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35749
I1109 00:53:57.722613  112068 pv_controller.go:796] volume "pv-i-prebound" entered phase "Bound"
I1109 00:53:57.722628  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I1109 00:53:57.722645  112068 pv_controller.go:899] volume "pv-i-prebound" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound"
I1109 00:53:57.725458  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-i-pv-prebound: (2.583455ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I1109 00:53:57.725744  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound" with version 35752
I1109 00:53:57.725782  112068 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound]: bound to "pv-i-prebound"
I1109 00:53:57.725793  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound] status: set phase Bound
I1109 00:53:57.730701  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-i-pv-prebound/status: (4.66579ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:57.730936  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound" with version 35755
I1109 00:53:57.730964  112068 pv_controller.go:740] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound" entered phase "Bound"
I1109 00:53:57.730983  112068 pv_controller.go:955] volume "pv-i-prebound" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound"
I1109 00:53:57.731006  112068 pv_controller.go:956] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound (uid: 9f2e4701-ccb3-4cf5-93bd-9d7cc8c3040d)", boundByController: false
I1109 00:53:57.731022  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1109 00:53:57.731056  112068 pv_controller_base.go:533] storeObjectUpdate: ignoring claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound" version 35752
I1109 00:53:57.731752  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound" with version 35755
I1109 00:53:57.731774  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound]: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1109 00:53:57.731794  112068 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound (uid: 9f2e4701-ccb3-4cf5-93bd-9d7cc8c3040d)", boundByController: false
I1109 00:53:57.731805  112068 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound]: claim is already correctly bound
I1109 00:53:57.731816  112068 pv_controller.go:929] binding volume "pv-i-prebound" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound"
I1109 00:53:57.731827  112068 pv_controller.go:827] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound"
I1109 00:53:57.731856  112068 pv_controller.go:839] updating PersistentVolume[pv-i-prebound]: already bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound"
I1109 00:53:57.731866  112068 pv_controller.go:775] updating PersistentVolume[pv-i-prebound]: set phase Bound
I1109 00:53:57.731875  112068 pv_controller.go:778] updating PersistentVolume[pv-i-prebound]: phase Bound already set
I1109 00:53:57.731884  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I1109 00:53:57.731903  112068 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound]: already bound to "pv-i-prebound"
I1109 00:53:57.731913  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound] status: set phase Bound
I1109 00:53:57.731934  112068 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound] status: phase Bound already set
I1109 00:53:57.731947  112068 pv_controller.go:955] volume "pv-i-prebound" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound"
I1109 00:53:57.731967  112068 pv_controller.go:956] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound (uid: 9f2e4701-ccb3-4cf5-93bd-9d7cc8c3040d)", boundByController: false
I1109 00:53:57.731985  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1109 00:53:57.765136  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (3.876625ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:57.864429  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (3.133949ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:57.963319  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.113146ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:58.063937  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.811721ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:58.162976  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.826975ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:58.264581  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.643137ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:58.364284  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (3.081527ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:58.463550  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.197339ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:58.564609  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (3.432186ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:58.666079  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (4.766329ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:58.764935  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (3.742219ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:58.864359  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (3.07302ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:58.965179  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.757536ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.063721  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.666995ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.163129  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.976349ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.262673  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.521116ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.362909  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (1.729539ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.463495  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.290161ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.518628  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pv-prebound
I1109 00:53:59.518660  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pv-prebound
I1109 00:53:59.518856  112068 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pv-prebound" match with Node "node-1"
I1109 00:53:59.518898  112068 scheduler_binder.go:653] PersistentVolume "pv-i-prebound", Node "node-2" mismatch for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pv-prebound": No matching NodeSelectorTerms
I1109 00:53:59.518956  112068 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pv-prebound", node "node-1"
I1109 00:53:59.518968  112068 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pv-prebound", node "node-1": all PVCs bound and nothing to do
I1109 00:53:59.519035  112068 factory.go:698] Attempting to bind pod-i-pv-prebound to node-1
I1109 00:53:59.520633  112068 cache.go:656] Couldn't expire cache for pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pv-prebound. Binding is still in progress.
I1109 00:53:59.522602  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound/binding: (3.124518ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.522862  112068 scheduler.go:756] pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pv-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible.
I1109 00:53:59.525719  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (2.552529ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.564161  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pv-prebound: (2.993001ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.566054  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-i-pv-prebound: (1.372932ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.567819  112068 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-prebound: (1.324416ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.574418  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (6.11774ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.579327  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (3.939633ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.579661  112068 pv_controller_base.go:265] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound" deleted
I1109 00:53:59.579703  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35749
I1109 00:53:59.579741  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound (uid: 9f2e4701-ccb3-4cf5-93bd-9d7cc8c3040d)", boundByController: false
I1109 00:53:59.579752  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound
I1109 00:53:59.579772  112068 pv_controller.go:545] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound not found
I1109 00:53:59.579787  112068 pv_controller.go:573] volume "pv-i-prebound" is released and reclaim policy "Retain" will be executed
I1109 00:53:59.579797  112068 pv_controller.go:775] updating PersistentVolume[pv-i-prebound]: set phase Released
I1109 00:53:59.582956  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (2.842538ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:53:59.583195  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 36269
I1109 00:53:59.583237  112068 pv_controller.go:796] volume "pv-i-prebound" entered phase "Released"
I1109 00:53:59.583249  112068 pv_controller.go:1009] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
I1109 00:53:59.583272  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 36269
I1109 00:53:59.583296  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Released, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound (uid: 9f2e4701-ccb3-4cf5-93bd-9d7cc8c3040d)", boundByController: false
I1109 00:53:59.583308  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound
I1109 00:53:59.583328  112068 pv_controller.go:545] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound not found
I1109 00:53:59.583334  112068 pv_controller.go:1009] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
I1109 00:53:59.585667  112068 httplog.go:90] DELETE /api/v1/persistentvolumes: (5.603854ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.586044  112068 pv_controller_base.go:216] volume "pv-i-prebound" deleted
I1109 00:53:59.586086  112068 pv_controller_base.go:403] deletion of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-pv-prebound" was already processed
I1109 00:53:59.592512  112068 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.489794ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.592684  112068 volume_binding_test.go:191] Running test mix immediate and wait
I1109 00:53:59.594371  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.448927ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.596462  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.680491ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.598298  112068 httplog.go:90] POST /api/v1/persistentvolumes: (1.468273ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.598609  112068 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-4", version 36275
I1109 00:53:59.598644  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Pending, bound to: "", boundByController: false
I1109 00:53:59.598665  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-4]: volume is unused
I1109 00:53:59.598673  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-4]: set phase Available
I1109 00:53:59.600633  112068 httplog.go:90] POST /api/v1/persistentvolumes: (1.870224ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.603040  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (4.159218ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:53:59.603252  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36277
I1109 00:53:59.603283  112068 pv_controller.go:796] volume "pv-w-canbind-4" entered phase "Available"
I1109 00:53:59.603311  112068 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-canbind-2", version 36276
I1109 00:53:59.603326  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Pending, bound to: "", boundByController: false
I1109 00:53:59.603344  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-i-canbind-2]: volume is unused
I1109 00:53:59.603350  112068 pv_controller.go:775] updating PersistentVolume[pv-i-canbind-2]: set phase Available
I1109 00:53:59.603719  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (2.547963ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.603843  112068 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4", version 36279
I1109 00:53:59.603912  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:59.603949  112068 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4]: no volume found
I1109 00:53:59.603999  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4] status: set phase Pending
I1109 00:53:59.604020  112068 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4] status: phase Pending already set
I1109 00:53:59.604179  112068 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2", Name:"pvc-w-canbind-4", UID:"1a1a3974-469a-4260-9001-d385288b0256", APIVersion:"v1", ResourceVersion:"36279", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1109 00:53:59.605388  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (1.80518ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:53:59.605623  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 36280
I1109 00:53:59.605650  112068 pv_controller.go:796] volume "pv-i-canbind-2" entered phase "Available"
I1109 00:53:59.605674  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36277
I1109 00:53:59.605690  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Available, bound to: "", boundByController: false
I1109 00:53:59.605709  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-4]: volume is unused
I1109 00:53:59.605716  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-4]: set phase Available
I1109 00:53:59.605724  112068 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-4]: phase Available already set
I1109 00:53:59.605735  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 36280
I1109 00:53:59.605746  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Available, bound to: "", boundByController: false
I1109 00:53:59.605763  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-i-canbind-2]: volume is unused
I1109 00:53:59.605768  112068 pv_controller.go:775] updating PersistentVolume[pv-i-canbind-2]: set phase Available
I1109 00:53:59.605775  112068 pv_controller.go:778] updating PersistentVolume[pv-i-canbind-2]: phase Available already set
I1109 00:53:59.606189  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (1.848741ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.606569  112068 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2", version 36281
I1109 00:53:59.606593  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:59.606623  112068 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2]: volume "pv-i-canbind-2" found: phase: Available, bound to: "", boundByController: false
I1109 00:53:59.606643  112068 pv_controller.go:929] binding volume "pv-i-canbind-2" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2"
I1109 00:53:59.606661  112068 pv_controller.go:827] updating PersistentVolume[pv-i-canbind-2]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2"
I1109 00:53:59.606684  112068 pv_controller.go:847] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2" bound to volume "pv-i-canbind-2"
I1109 00:53:59.607894  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (3.2693ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48646]
I1109 00:53:59.608652  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (1.872857ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.609492  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound
I1109 00:53:59.609514  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound
E1109 00:53:59.609656  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-mix-bound": pod has unbound immediate PersistentVolumeClaims
E1109 00:53:59.609680  112068 factory.go:648] Error scheduling volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound: error while running "VolumeBinding" filter plugin for pod "pod-mix-bound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 00:53:59.609700  112068 scheduler.go:774] Updating pod condition for volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound to (PodScheduled==False, Reason=Unschedulable)
I1109 00:53:59.609735  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2: (2.745066ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:53:59.609947  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 36284
I1109 00:53:59.609975  112068 pv_controller.go:860] updating PersistentVolume[pv-i-canbind-2]: bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2"
I1109 00:53:59.610025  112068 pv_controller.go:775] updating PersistentVolume[pv-i-canbind-2]: set phase Bound
I1109 00:53:59.610096  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 36284
I1109 00:53:59.610185  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2 (uid: 145e0d7a-2a90-4b68-bb0d-4db1324174f8)", boundByController: true
I1109 00:53:59.610578  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2
I1109 00:53:59.610705  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:59.610823  112068 pv_controller.go:601] synchronizing PersistentVolume[pv-i-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I1109 00:53:59.611270  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (1.167362ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:53:59.611864  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound/status: (1.703082ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
E1109 00:53:59.612100  112068 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-mix-bound": pod has unbound immediate PersistentVolumeClaims
I1109 00:53:59.612329  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound
I1109 00:53:59.612432  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound
I1109 00:53:59.612375  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (2.09841ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48646]
I1109 00:53:59.612706  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 36286
I1109 00:53:59.612739  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2 (uid: 145e0d7a-2a90-4b68-bb0d-4db1324174f8)", boundByController: true
I1109 00:53:59.612751  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2
I1109 00:53:59.612767  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:53:59.612782  112068 pv_controller.go:601] synchronizing PersistentVolume[pv-i-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
E1109 00:53:59.612817  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-mix-bound": pod has unbound immediate PersistentVolumeClaims
I1109 00:53:59.612935  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 36286
I1109 00:53:59.612979  112068 pv_controller.go:796] volume "pv-i-canbind-2" entered phase "Bound"
I1109 00:53:59.612995  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2]: binding to "pv-i-canbind-2"
I1109 00:53:59.613013  112068 pv_controller.go:899] volume "pv-i-canbind-2" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2"
E1109 00:53:59.613275  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-mix-bound": pod has unbound immediate PersistentVolumeClaims
E1109 00:53:59.613335  112068 factory.go:648] Error scheduling volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound: error while running "VolumeBinding" filter plugin for pod "pod-mix-bound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 00:53:59.613360  112068 scheduler.go:774] Updating pod condition for volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound to (PodScheduled==False, Reason=Unschedulable)
E1109 00:53:59.613374  112068 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-mix-bound": pod has unbound immediate PersistentVolumeClaims
I1109 00:53:59.613570  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (2.081503ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48648]
I1109 00:53:59.616520  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (2.963165ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:53:59.616771  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (2.843138ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48648]
I1109 00:53:59.617618  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-i-canbind-2: (4.049896ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I1109 00:53:59.617869  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2" with version 36289
I1109 00:53:59.617900  112068 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2]: bound to "pv-i-canbind-2"
I1109 00:53:59.617911  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2] status: set phase Bound
I1109 00:53:59.623202  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-i-canbind-2/status: (5.031057ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:53:59.623650  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2" with version 36290
I1109 00:53:59.623678  112068 pv_controller.go:740] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2" entered phase "Bound"
I1109 00:53:59.623697  112068 pv_controller.go:955] volume "pv-i-canbind-2" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2"
I1109 00:53:59.623725  112068 pv_controller.go:956] volume "pv-i-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2 (uid: 145e0d7a-2a90-4b68-bb0d-4db1324174f8)", boundByController: true
I1109 00:53:59.623740  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2" status after binding: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I1109 00:53:59.623773  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2" with version 36290
I1109 00:53:59.623787  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2]: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I1109 00:53:59.623801  112068 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2]: volume "pv-i-canbind-2" found: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2 (uid: 145e0d7a-2a90-4b68-bb0d-4db1324174f8)", boundByController: true
I1109 00:53:59.623811  112068 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2]: claim is already correctly bound
I1109 00:53:59.623821  112068 pv_controller.go:929] binding volume "pv-i-canbind-2" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2"
I1109 00:53:59.623893  112068 pv_controller.go:827] updating PersistentVolume[pv-i-canbind-2]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2"
I1109 00:53:59.623919  112068 pv_controller.go:839] updating PersistentVolume[pv-i-canbind-2]: already bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2"
I1109 00:53:59.623929  112068 pv_controller.go:775] updating PersistentVolume[pv-i-canbind-2]: set phase Bound
I1109 00:53:59.623940  112068 pv_controller.go:778] updating PersistentVolume[pv-i-canbind-2]: phase Bound already set
I1109 00:53:59.623949  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2]: binding to "pv-i-canbind-2"
I1109 00:53:59.623968  112068 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2]: already bound to "pv-i-canbind-2"
I1109 00:53:59.623977  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2] status: set phase Bound
I1109 00:53:59.623997  112068 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2] status: phase Bound already set
I1109 00:53:59.624009  112068 pv_controller.go:955] volume "pv-i-canbind-2" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2"
I1109 00:53:59.624027  112068 pv_controller.go:956] volume "pv-i-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2 (uid: 145e0d7a-2a90-4b68-bb0d-4db1324174f8)", boundByController: true
I1109 00:53:59.624040  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2" status after binding: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I1109 00:53:59.711258  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (1.82405ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:53:59.811568  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (2.165844ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:53:59.911256  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (1.932667ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:00.011267  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (1.869998ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:00.111367  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (2.055759ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:00.210850  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (1.568304ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:00.311158  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (1.761944ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:00.411081  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (1.739583ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:00.512355  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (3.034133ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:00.630895  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (21.642467ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:00.711718  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (1.838334ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:00.811261  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (1.832718ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:00.912582  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (2.836574ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:01.013227  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (2.735205ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:01.111525  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (2.104144ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:01.211114  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (1.742551ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:01.312721  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (1.963796ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:01.411164  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (1.512137ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:01.511022  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (1.544671ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:01.518916  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound
I1109 00:54:01.518960  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound
I1109 00:54:01.519225  112068 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound" match with Node "node-1"
I1109 00:54:01.519277  112068 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound" on node "node-1"
I1109 00:54:01.519361  112068 scheduler_binder.go:653] PersistentVolume "pv-i-canbind-2", Node "node-2" mismatch for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound": No matching NodeSelectorTerms
I1109 00:54:01.519393  112068 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound", PVC "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4" on node "node-2"
I1109 00:54:01.519407  112068 scheduler_binder.go:725] storage class "wait-7zl5" of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4" does not support dynamic provisioning
I1109 00:54:01.519474  112068 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound", node "node-1"
I1109 00:54:01.519520  112068 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-canbind-4", version 36277
I1109 00:54:01.519586  112068 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound", node "node-1"
I1109 00:54:01.519606  112068 scheduler_binder.go:404] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4" bound to volume "pv-w-canbind-4"
I1109 00:54:01.520932  112068 cache.go:656] Couldn't expire cache for pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound. Binding is still in progress.
I1109 00:54:01.523397  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4: (3.312711ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:01.524041  112068 scheduler_binder.go:410] updating PersistentVolume[pv-w-canbind-4]: bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4"
I1109 00:54:01.524263  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36753
I1109 00:54:01.524302  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4 (uid: 1a1a3974-469a-4260-9001-d385288b0256)", boundByController: true
I1109 00:54:01.524313  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4
I1109 00:54:01.524332  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:54:01.524355  112068 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind-4]: volume not bound yet, waiting for syncClaim to fix it
I1109 00:54:01.524384  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4" with version 36279
I1109 00:54:01.524397  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:54:01.524429  112068 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4]: volume "pv-w-canbind-4" found: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4 (uid: 1a1a3974-469a-4260-9001-d385288b0256)", boundByController: true
I1109 00:54:01.524444  112068 pv_controller.go:929] binding volume "pv-w-canbind-4" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4"
I1109 00:54:01.524459  112068 pv_controller.go:827] updating PersistentVolume[pv-w-canbind-4]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4"
I1109 00:54:01.524474  112068 pv_controller.go:839] updating PersistentVolume[pv-w-canbind-4]: already bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4"
I1109 00:54:01.524485  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-4]: set phase Bound
I1109 00:54:01.529096  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36754
I1109 00:54:01.529143  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4 (uid: 1a1a3974-469a-4260-9001-d385288b0256)", boundByController: true
I1109 00:54:01.529162  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4
I1109 00:54:01.529181  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:54:01.529196  112068 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind-4]: volume not bound yet, waiting for syncClaim to fix it
I1109 00:54:01.529537  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (4.802408ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:01.529771  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36754
I1109 00:54:01.529801  112068 pv_controller.go:796] volume "pv-w-canbind-4" entered phase "Bound"
I1109 00:54:01.529815  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4]: binding to "pv-w-canbind-4"
I1109 00:54:01.529838  112068 pv_controller.go:899] volume "pv-w-canbind-4" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4"
I1109 00:54:01.532745  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-canbind-4: (2.682657ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:01.532997  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4" with version 36755
I1109 00:54:01.533031  112068 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4]: bound to "pv-w-canbind-4"
I1109 00:54:01.533042  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4] status: set phase Bound
I1109 00:54:01.535522  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-canbind-4/status: (2.228744ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:01.535726  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4" with version 36756
I1109 00:54:01.535759  112068 pv_controller.go:740] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4" entered phase "Bound"
I1109 00:54:01.535780  112068 pv_controller.go:955] volume "pv-w-canbind-4" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4"
I1109 00:54:01.535805  112068 pv_controller.go:956] volume "pv-w-canbind-4" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4 (uid: 1a1a3974-469a-4260-9001-d385288b0256)", boundByController: true
I1109 00:54:01.535835  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4" status after binding: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
I1109 00:54:01.535866  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4" with version 36756
I1109 00:54:01.535879  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4]: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
I1109 00:54:01.535900  112068 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4]: volume "pv-w-canbind-4" found: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4 (uid: 1a1a3974-469a-4260-9001-d385288b0256)", boundByController: true
I1109 00:54:01.535910  112068 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4]: claim is already correctly bound
I1109 00:54:01.535920  112068 pv_controller.go:929] binding volume "pv-w-canbind-4" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4"
I1109 00:54:01.535930  112068 pv_controller.go:827] updating PersistentVolume[pv-w-canbind-4]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4"
I1109 00:54:01.535949  112068 pv_controller.go:839] updating PersistentVolume[pv-w-canbind-4]: already bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4"
I1109 00:54:01.535963  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-4]: set phase Bound
I1109 00:54:01.535972  112068 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-4]: phase Bound already set
I1109 00:54:01.535981  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4]: binding to "pv-w-canbind-4"
I1109 00:54:01.535999  112068 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4]: already bound to "pv-w-canbind-4"
I1109 00:54:01.536012  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4] status: set phase Bound
I1109 00:54:01.536029  112068 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4] status: phase Bound already set
I1109 00:54:01.536041  112068 pv_controller.go:955] volume "pv-w-canbind-4" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4"
I1109 00:54:01.536059  112068 pv_controller.go:956] volume "pv-w-canbind-4" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4 (uid: 1a1a3974-469a-4260-9001-d385288b0256)", boundByController: true
I1109 00:54:01.536072  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4" status after binding: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
I1109 00:54:01.611354  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (2.0206ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:01.713571  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (3.949576ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:01.813848  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (4.261399ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:01.915420  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (1.690167ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.011045  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (1.55478ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.114905  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (5.595138ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.211075  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (1.77512ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.311000  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (1.693133ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.411694  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (2.227048ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.511477  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (2.107621ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.521132  112068 cache.go:656] Couldn't expire cache for pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound. Binding is still in progress.
I1109 00:54:02.524385  112068 scheduler_binder.go:553] All PVCs for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound" are bound
I1109 00:54:02.524459  112068 factory.go:698] Attempting to bind pod-mix-bound to node-1
I1109 00:54:02.527467  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound/binding: (2.618564ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.527805  112068 scheduler.go:756] pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-mix-bound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible.
I1109 00:54:02.533915  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (5.796428ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.542602  112068 httplog.go:90] GET /api/v1/namespaces/default: (1.855152ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.544495  112068 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.361493ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.546108  112068 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.287158ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.611380  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-mix-bound: (2.065197ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.613803  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-canbind-4: (1.919029ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.619733  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-i-canbind-2: (5.572159ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.621598  112068 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-4: (1.403444ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.626610  112068 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-canbind-2: (4.540613ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.634750  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (7.49678ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.640064  112068 pv_controller_base.go:265] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2" deleted
I1109 00:54:02.640112  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 36286
I1109 00:54:02.640149  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2 (uid: 145e0d7a-2a90-4b68-bb0d-4db1324174f8)", boundByController: true
I1109 00:54:02.640168  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2
I1109 00:54:02.642087  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-i-canbind-2: (1.643173ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48650]
I1109 00:54:02.642372  112068 pv_controller.go:545] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2 not found
I1109 00:54:02.642404  112068 pv_controller.go:573] volume "pv-i-canbind-2" is released and reclaim policy "Retain" will be executed
I1109 00:54:02.642416  112068 pv_controller.go:775] updating PersistentVolume[pv-i-canbind-2]: set phase Released
I1109 00:54:02.643769  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (8.104562ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.644522  112068 pv_controller_base.go:265] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4" deleted
I1109 00:54:02.646167  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (3.491375ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48650]
I1109 00:54:02.646398  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 36961
I1109 00:54:02.646430  112068 pv_controller.go:796] volume "pv-i-canbind-2" entered phase "Released"
I1109 00:54:02.646442  112068 pv_controller.go:1009] reclaimVolume[pv-i-canbind-2]: policy is Retain, nothing to do
I1109 00:54:02.646465  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36754
I1109 00:54:02.646488  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4 (uid: 1a1a3974-469a-4260-9001-d385288b0256)", boundByController: true
I1109 00:54:02.646515  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4
I1109 00:54:02.650256  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-canbind-4: (3.579612ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48650]
I1109 00:54:02.650466  112068 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4 not found
I1109 00:54:02.650545  112068 pv_controller.go:573] volume "pv-w-canbind-4" is released and reclaim policy "Retain" will be executed
I1109 00:54:02.650633  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-4]: set phase Released
I1109 00:54:02.655143  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (4.219615ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48650]
I1109 00:54:02.655496  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36965
I1109 00:54:02.655607  112068 pv_controller.go:796] volume "pv-w-canbind-4" entered phase "Released"
I1109 00:54:02.655666  112068 pv_controller.go:1009] reclaimVolume[pv-w-canbind-4]: policy is Retain, nothing to do
I1109 00:54:02.655736  112068 pv_controller_base.go:216] volume "pv-i-canbind-2" deleted
I1109 00:54:02.655832  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36965
I1109 00:54:02.655947  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Released, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4 (uid: 1a1a3974-469a-4260-9001-d385288b0256)", boundByController: true
I1109 00:54:02.656041  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4
I1109 00:54:02.656139  112068 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4 not found
I1109 00:54:02.656235  112068 pv_controller.go:1009] reclaimVolume[pv-w-canbind-4]: policy is Retain, nothing to do
I1109 00:54:02.656155  112068 pv_controller_base.go:403] deletion of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-canbind-2" was already processed
I1109 00:54:02.656429  112068 httplog.go:90] DELETE /api/v1/persistentvolumes: (11.773206ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.656758  112068 pv_controller_base.go:216] volume "pv-w-canbind-4" deleted
I1109 00:54:02.656795  112068 pv_controller_base.go:403] deletion of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind-4" was already processed
I1109 00:54:02.665921  112068 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (9.177682ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.666250  112068 volume_binding_test.go:191] Running test immediate pvc prebound
I1109 00:54:02.671293  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.032599ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.673773  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.080041ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.675853  112068 httplog.go:90] POST /api/v1/persistentvolumes: (1.726774ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.676100  112068 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-pvc-prebound", version 36974
I1109 00:54:02.676138  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Pending, bound to: "", boundByController: false
I1109 00:54:02.676158  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I1109 00:54:02.676167  112068 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I1109 00:54:02.678309  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (1.859175ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48650]
I1109 00:54:02.678590  112068 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound", version 36976
I1109 00:54:02.678611  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound]: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1109 00:54:02.678644  112068 pv_controller.go:345] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested
I1109 00:54:02.678660  112068 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested and found: phase: Pending, bound to: "", boundByController: false
I1109 00:54:02.678676  112068 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound]: volume is unbound, binding
I1109 00:54:02.678692  112068 pv_controller.go:929] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound"
I1109 00:54:02.678702  112068 pv_controller.go:827] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound"
I1109 00:54:02.678722  112068 pv_controller.go:847] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound" bound to volume "pv-i-pvc-prebound"
I1109 00:54:02.681986  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pvc-prebound
I1109 00:54:02.682005  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pvc-prebound
E1109 00:54:02.682157  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1109 00:54:02.682183  112068 factory.go:648] Error scheduling volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 00:54:02.682202  112068 scheduler.go:774] Updating pod condition for volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I1109 00:54:02.682503  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (3.796944ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48650]
I1109 00:54:02.682910  112068 store.go:365] GuaranteedUpdate of /cc08644e-00cc-4109-8d71-8e808dc3f283/persistentvolumes/pv-i-pvc-prebound failed because of a conflict, going to retry
I1109 00:54:02.684190  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (7.643609ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:02.684376  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (1.54609ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49574]
I1109 00:54:02.684478  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 36977
I1109 00:54:02.684503  112068 pv_controller.go:796] volume "pv-i-pvc-prebound" entered phase "Available"
I1109 00:54:02.684529  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 36977
I1109 00:54:02.684544  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I1109 00:54:02.684567  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I1109 00:54:02.684573  112068 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I1109 00:54:02.684582  112068 pv_controller.go:778] updating PersistentVolume[pv-i-pvc-prebound]: phase Available already set
I1109 00:54:02.685526  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound: (6.172137ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I1109 00:54:02.685832  112068 pv_controller.go:850] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 00:54:02.685857  112068 pv_controller.go:932] error binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 00:54:02.685878  112068 pv_controller_base.go:251] could not sync claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 00:54:02.686488  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound/status: (2.476328ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48650]
E1109 00:54:02.686759  112068 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 00:54:02.686894  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (1.897311ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49576]
I1109 00:54:02.784785  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (1.461908ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:09.185723  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (2.17725ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:09.227754  112068 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.929818ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:09.230003  112068 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.779754ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:09.231770  112068 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.337313ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:09.285570  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (2.227798ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:12.484886  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (1.66404ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:12.544576  112068 httplog.go:90] GET /api/v1/namespaces/default: (3.037135ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:12.546480  112068 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.473075ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:12.548112  112068 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.235516ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:12.587256  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (3.034137ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:12.687312  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (4.0063ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:12.710696  112068 pv_controller_base.go:426] resyncing PV controller
I1109 00:54:12.710809  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound" with version 36976
I1109 00:54:12.710843  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound]: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1109 00:54:12.710840  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 36977
I1109 00:54:12.710863  112068 pv_controller.go:345] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested
I1109 00:54:12.710884  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I1109 00:54:12.710899  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I1109 00:54:12.710906  112068 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I1109 00:54:12.710912  112068 pv_controller.go:778] updating PersistentVolume[pv-i-pvc-prebound]: phase Available already set
I1109 00:54:12.710929  112068 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested and found: phase: Available, bound to: "", boundByController: false
I1109 00:54:12.710942  112068 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound]: volume is unbound, binding
I1109 00:54:12.710957  112068 pv_controller.go:929] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound"
I1109 00:54:12.710970  112068 pv_controller.go:827] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound"
I1109 00:54:12.711004  112068 pv_controller.go:847] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound" bound to volume "pv-i-pvc-prebound"
I1109 00:54:12.714604  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound: (3.208658ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:12.715142  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pvc-prebound
I1109 00:54:12.715164  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pvc-prebound
I1109 00:54:12.715162  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 38807
I1109 00:54:12.715192  112068 pv_controller.go:860] updating PersistentVolume[pv-i-pvc-prebound]: bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound"
I1109 00:54:12.715206  112068 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1109 00:54:12.715326  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 38807
E1109 00:54:12.715422  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 00:54:12.715362  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound (uid: 55021617-6d3f-4aed-9619-750f2a509da9)", boundByController: true
I1109 00:54:12.715462  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound
I1109 00:54:12.715482  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound found: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1109 00:54:12.715495  112068 pv_controller.go:617] synchronizing PersistentVolume[pv-i-pvc-prebound]: all is bound
I1109 00:54:12.715505  112068 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
E1109 00:54:12.715598  112068 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1109 00:54:12.715663  112068 factory.go:648] Error scheduling volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 00:54:12.715695  112068 scheduler.go:774] Updating pod condition for volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E1109 00:54:12.715713  112068 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 00:54:12.717468  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (1.311519ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:12.718127  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (1.944553ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52716]
I1109 00:54:12.718881  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (3.332849ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47728]
I1109 00:54:12.719153  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 38809
I1109 00:54:12.719180  112068 pv_controller.go:796] volume "pv-i-pvc-prebound" entered phase "Bound"
I1109 00:54:12.719197  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound]: binding to "pv-i-pvc-prebound"
I1109 00:54:12.719236  112068 pv_controller.go:899] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound"
I1109 00:54:12.721641  112068 store.go:365] GuaranteedUpdate of /cc08644e-00cc-4109-8d71-8e808dc3f283/persistentvolumes/pv-i-pvc-prebound failed because of a conflict, going to retry
I1109 00:54:12.721938  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (6.176246ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:12.722200  112068 pv_controller.go:788] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound failed: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 00:54:12.722255  112068 pv_controller_base.go:204] could not sync volume "pv-i-pvc-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 00:54:12.722313  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 38809
I1109 00:54:12.722573  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound (uid: 55021617-6d3f-4aed-9619-750f2a509da9)", boundByController: true
I1109 00:54:12.722632  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-i-prebound: (3.165196ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52716]
I1109 00:54:12.722661  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound
I1109 00:54:12.722927  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound found: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1109 00:54:12.722998  112068 pv_controller.go:617] synchronizing PersistentVolume[pv-i-pvc-prebound]: all is bound
I1109 00:54:12.723065  112068 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1109 00:54:12.723113  112068 pv_controller.go:778] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I1109 00:54:12.723342  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound" with version 38810
I1109 00:54:12.723369  112068 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound]: bound to "pv-i-pvc-prebound"
I1109 00:54:12.723379  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound] status: set phase Bound
I1109 00:54:12.725674  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-i-prebound/status: (2.035417ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:12.725973  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound" with version 38811
I1109 00:54:12.726062  112068 pv_controller.go:740] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound" entered phase "Bound"
I1109 00:54:12.726101  112068 pv_controller.go:955] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound"
I1109 00:54:12.726130  112068 pv_controller.go:956] volume "pv-i-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound (uid: 55021617-6d3f-4aed-9619-750f2a509da9)", boundByController: true
I1109 00:54:12.726153  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound" status after binding: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I1109 00:54:12.726241  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound" with version 38811
I1109 00:54:12.726264  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound]: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I1109 00:54:12.726299  112068 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound]: volume "pv-i-pvc-prebound" found: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound (uid: 55021617-6d3f-4aed-9619-750f2a509da9)", boundByController: true
I1109 00:54:12.726308  112068 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound]: claim is already correctly bound
I1109 00:54:12.726318  112068 pv_controller.go:929] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound"
I1109 00:54:12.726328  112068 pv_controller.go:827] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound"
I1109 00:54:12.726362  112068 pv_controller.go:839] updating PersistentVolume[pv-i-pvc-prebound]: already bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound"
I1109 00:54:12.726373  112068 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1109 00:54:12.726379  112068 pv_controller.go:778] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I1109 00:54:12.726386  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound]: binding to "pv-i-pvc-prebound"
I1109 00:54:12.726419  112068 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound]: already bound to "pv-i-pvc-prebound"
I1109 00:54:12.726427  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound] status: set phase Bound
I1109 00:54:12.726440  112068 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound] status: phase Bound already set
I1109 00:54:12.726448  112068 pv_controller.go:955] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound"
I1109 00:54:12.726461  112068 pv_controller.go:956] volume "pv-i-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound (uid: 55021617-6d3f-4aed-9619-750f2a509da9)", boundByController: true
I1109 00:54:12.726486  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound" status after binding: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I1109 00:54:12.785481  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (2.165805ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:12.885186  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (1.862334ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:12.988670  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (5.326011ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:13.085618  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (2.287235ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:13.185115  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (1.718553ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:13.285280  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (1.843127ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:13.385266  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (1.840092ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:13.486650  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (3.217169ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:13.585858  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (2.460623ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:13.685416  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (2.023388ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:13.784864  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (1.500786ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:13.885335  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (1.932138ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:13.985329  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (1.871727ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.088140  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (4.85613ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.189888  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (1.820422ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.285184  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (1.812675ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.387652  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (1.994122ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.485233  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (1.908013ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.522551  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pvc-prebound
I1109 00:54:14.522582  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pvc-prebound
I1109 00:54:14.522796  112068 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pvc-prebound" match with Node "node-1"
I1109 00:54:14.522859  112068 scheduler_binder.go:653] PersistentVolume "pv-i-pvc-prebound", Node "node-2" mismatch for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pvc-prebound": No matching NodeSelectorTerms
I1109 00:54:14.522933  112068 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pvc-prebound", node "node-1"
I1109 00:54:14.522955  112068 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pvc-prebound", node "node-1": all PVCs bound and nothing to do
I1109 00:54:14.523026  112068 factory.go:698] Attempting to bind pod-i-pvc-prebound to node-1
I1109 00:54:14.523448  112068 cache.go:656] Couldn't expire cache for pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pvc-prebound. Binding is still in progress.
I1109 00:54:14.525759  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound/binding: (2.157635ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.526007  112068 scheduler.go:756] pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-i-pvc-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible.
I1109 00:54:14.530369  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (3.997804ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.588424  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-i-pvc-prebound: (5.121325ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.592759  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-i-prebound: (3.734803ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.595427  112068 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-pvc-prebound: (1.600291ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.609582  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (13.218765ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.621990  112068 pv_controller_base.go:265] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound" deleted
I1109 00:54:14.622034  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 38809
I1109 00:54:14.622071  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound (uid: 55021617-6d3f-4aed-9619-750f2a509da9)", boundByController: true
I1109 00:54:14.622093  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound
I1109 00:54:14.622650  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (10.217657ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.625685  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-i-prebound: (2.602909ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:14.625972  112068 pv_controller.go:545] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound not found
I1109 00:54:14.626005  112068 pv_controller.go:573] volume "pv-i-pvc-prebound" is released and reclaim policy "Retain" will be executed
I1109 00:54:14.626018  112068 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Released
I1109 00:54:14.629466  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (3.093112ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:14.629720  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 39371
I1109 00:54:14.629759  112068 pv_controller.go:796] volume "pv-i-pvc-prebound" entered phase "Released"
I1109 00:54:14.629772  112068 pv_controller.go:1009] reclaimVolume[pv-i-pvc-prebound]: policy is Retain, nothing to do
I1109 00:54:14.630951  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 39371
I1109 00:54:14.630999  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Released, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound (uid: 55021617-6d3f-4aed-9619-750f2a509da9)", boundByController: true
I1109 00:54:14.631012  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound
I1109 00:54:14.631033  112068 pv_controller.go:545] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound not found
I1109 00:54:14.631047  112068 pv_controller.go:1009] reclaimVolume[pv-i-pvc-prebound]: policy is Retain, nothing to do
I1109 00:54:14.631454  112068 store.go:231] deletion of /cc08644e-00cc-4109-8d71-8e808dc3f283/persistentvolumes/pv-i-pvc-prebound failed because of a conflict, going to retry
I1109 00:54:14.637952  112068 httplog.go:90] DELETE /api/v1/persistentvolumes: (14.3436ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.638459  112068 pv_controller_base.go:216] volume "pv-i-pvc-prebound" deleted
I1109 00:54:14.638502  112068 pv_controller_base.go:403] deletion of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-i-prebound" was already processed
I1109 00:54:14.652405  112068 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (13.70249ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.652640  112068 volume_binding_test.go:191] Running test wait can bind
I1109 00:54:14.655139  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.273876ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.657078  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.530365ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.660163  112068 httplog.go:90] POST /api/v1/persistentvolumes: (2.151082ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.660419  112068 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind", version 39386
I1109 00:54:14.660453  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind]: phase: Pending, bound to: "", boundByController: false
I1109 00:54:14.660476  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I1109 00:54:14.660486  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind]: set phase Available
I1109 00:54:14.662766  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (2.199645ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.663548  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (2.819767ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:14.663773  112068 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind", version 39388
I1109 00:54:14.663799  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:54:14.663833  112068 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind]: no volume found
I1109 00:54:14.663853  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind] status: set phase Pending
I1109 00:54:14.663871  112068 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind] status: phase Pending already set
I1109 00:54:14.663879  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 39389
I1109 00:54:14.663973  112068 pv_controller.go:796] volume "pv-w-canbind" entered phase "Available"
I1109 00:54:14.664091  112068 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2", Name:"pvc-w-canbind", UID:"d4d3ad37-ed83-428d-8239-455ab388ace0", APIVersion:"v1", ResourceVersion:"39388", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1109 00:54:14.664316  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 39389
I1109 00:54:14.664351  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "", boundByController: false
I1109 00:54:14.664370  112068 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I1109 00:54:14.664378  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind]: set phase Available
I1109 00:54:14.664388  112068 pv_controller.go:778] updating PersistentVolume[pv-w-canbind]: phase Available already set
I1109 00:54:14.666155  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (2.097486ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.666492  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-canbind
I1109 00:54:14.666516  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-canbind
I1109 00:54:14.666732  112068 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-canbind" on node "node-1"
I1109 00:54:14.666775  112068 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-canbind", PVC "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind" on node "node-2"
I1109 00:54:14.666860  112068 scheduler_binder.go:725] storage class "wait-knvb" of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind" does not support dynamic provisioning
I1109 00:54:14.666948  112068 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-canbind", node "node-1"
I1109 00:54:14.666993  112068 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-canbind", version 39389
I1109 00:54:14.667072  112068 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-canbind", node "node-1"
I1109 00:54:14.667110  112068 scheduler_binder.go:404] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind" bound to volume "pv-w-canbind"
I1109 00:54:14.670751  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind: (3.294918ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:14.671547  112068 scheduler_binder.go:410] updating PersistentVolume[pv-w-canbind]: bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind"
I1109 00:54:14.671717  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 39393
I1109 00:54:14.671963  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind (uid: d4d3ad37-ed83-428d-8239-455ab388ace0)", boundByController: true
I1109 00:54:14.671985  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind
I1109 00:54:14.672085  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:54:14.672128  112068 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind]: volume not bound yet, waiting for syncClaim to fix it
I1109 00:54:14.672168  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind" with version 39388
I1109 00:54:14.672312  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:54:14.672326  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (7.587534ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:14.672346  112068 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind]: volume "pv-w-canbind" found: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind (uid: d4d3ad37-ed83-428d-8239-455ab388ace0)", boundByController: true
I1109 00:54:14.672356  112068 pv_controller.go:929] binding volume "pv-w-canbind" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind"
I1109 00:54:14.672437  112068 pv_controller.go:827] updating PersistentVolume[pv-w-canbind]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind"
I1109 00:54:14.672504  112068 pv_controller.go:839] updating PersistentVolume[pv-w-canbind]: already bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind"
I1109 00:54:14.672514  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind]: set phase Bound
I1109 00:54:14.675354  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (2.349969ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:14.675634  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 39395
I1109 00:54:14.675687  112068 pv_controller.go:796] volume "pv-w-canbind" entered phase "Bound"
I1109 00:54:14.675705  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind]: binding to "pv-w-canbind"
I1109 00:54:14.675641  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 39395
I1109 00:54:14.675815  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind (uid: d4d3ad37-ed83-428d-8239-455ab388ace0)", boundByController: true
I1109 00:54:14.675855  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind
I1109 00:54:14.675727  112068 pv_controller.go:899] volume "pv-w-canbind" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind"
I1109 00:54:14.675994  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:54:14.676070  112068 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind]: volume not bound yet, waiting for syncClaim to fix it
I1109 00:54:14.681823  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-canbind: (5.609028ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:14.682500  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind" with version 39397
I1109 00:54:14.682529  112068 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind]: bound to "pv-w-canbind"
I1109 00:54:14.682541  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind] status: set phase Bound
I1109 00:54:14.685699  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-canbind/status: (2.843706ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:14.686317  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind" with version 39400
I1109 00:54:14.686354  112068 pv_controller.go:740] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind" entered phase "Bound"
I1109 00:54:14.686373  112068 pv_controller.go:955] volume "pv-w-canbind" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind"
I1109 00:54:14.686399  112068 pv_controller.go:956] volume "pv-w-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind (uid: d4d3ad37-ed83-428d-8239-455ab388ace0)", boundByController: true
I1109 00:54:14.686417  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind" status after binding: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I1109 00:54:14.686625  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind" with version 39400
I1109 00:54:14.686640  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind]: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I1109 00:54:14.686660  112068 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind]: volume "pv-w-canbind" found: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind (uid: d4d3ad37-ed83-428d-8239-455ab388ace0)", boundByController: true
I1109 00:54:14.686669  112068 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind]: claim is already correctly bound
I1109 00:54:14.686679  112068 pv_controller.go:929] binding volume "pv-w-canbind" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind"
I1109 00:54:14.686692  112068 pv_controller.go:827] updating PersistentVolume[pv-w-canbind]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind"
I1109 00:54:14.686707  112068 pv_controller.go:839] updating PersistentVolume[pv-w-canbind]: already bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind"
I1109 00:54:14.686715  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind]: set phase Bound
I1109 00:54:14.686721  112068 pv_controller.go:778] updating PersistentVolume[pv-w-canbind]: phase Bound already set
I1109 00:54:14.686728  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind]: binding to "pv-w-canbind"
I1109 00:54:14.686739  112068 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind]: already bound to "pv-w-canbind"
I1109 00:54:14.686745  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind] status: set phase Bound
I1109 00:54:14.686757  112068 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind] status: phase Bound already set
I1109 00:54:14.686765  112068 pv_controller.go:955] volume "pv-w-canbind" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind"
I1109 00:54:14.686778  112068 pv_controller.go:956] volume "pv-w-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind (uid: d4d3ad37-ed83-428d-8239-455ab388ace0)", boundByController: true
I1109 00:54:14.686787  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind" status after binding: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I1109 00:54:14.768419  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind: (1.50149ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:14.868880  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind: (1.944957ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:14.968655  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind: (1.756595ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.071236  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind: (4.342935ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.168846  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind: (1.903693ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.268459  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind: (1.541183ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.368772  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind: (1.834346ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.478944  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind: (11.912159ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.523712  112068 cache.go:656] Couldn't expire cache for pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-canbind. Binding is still in progress.
I1109 00:54:15.569598  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind: (1.652761ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.668601  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind: (1.746693ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.671851  112068 scheduler_binder.go:553] All PVCs for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-canbind" are bound
I1109 00:54:15.671917  112068 factory.go:698] Attempting to bind pod-w-canbind to node-1
I1109 00:54:15.674810  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind/binding: (2.558912ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.677397  112068 scheduler.go:756] pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-canbind is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible.
I1109 00:54:15.681518  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (3.730697ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.769181  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-canbind: (2.16356ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.771696  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-canbind: (1.67278ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.773578  112068 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind: (1.417142ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.781311  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (7.2068ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.789899  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (8.141914ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.790755  112068 pv_controller_base.go:265] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind" deleted
I1109 00:54:15.790795  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 39395
I1109 00:54:15.790827  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind (uid: d4d3ad37-ed83-428d-8239-455ab388ace0)", boundByController: true
I1109 00:54:15.790847  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind
I1109 00:54:15.792517  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-canbind: (1.45304ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:15.792725  112068 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind not found
I1109 00:54:15.792746  112068 pv_controller.go:573] volume "pv-w-canbind" is released and reclaim policy "Retain" will be executed
I1109 00:54:15.792758  112068 pv_controller.go:775] updating PersistentVolume[pv-w-canbind]: set phase Released
I1109 00:54:15.795365  112068 httplog.go:90] DELETE /api/v1/persistentvolumes: (4.468018ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.795460  112068 store.go:365] GuaranteedUpdate of /cc08644e-00cc-4109-8d71-8e808dc3f283/persistentvolumes/pv-w-canbind failed because of a conflict, going to retry
I1109 00:54:15.795633  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (2.582294ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:15.795824  112068 pv_controller.go:788] updating PersistentVolume[pv-w-canbind]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pv-w-canbind": StorageError: invalid object, Code: 4, Key: /cc08644e-00cc-4109-8d71-8e808dc3f283/persistentvolumes/pv-w-canbind, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 39edb688-332c-4cd1-8e66-8839e2e4329b, UID in object meta: 
I1109 00:54:15.795848  112068 pv_controller_base.go:204] could not sync volume "pv-w-canbind": Operation cannot be fulfilled on persistentvolumes "pv-w-canbind": StorageError: invalid object, Code: 4, Key: /cc08644e-00cc-4109-8d71-8e808dc3f283/persistentvolumes/pv-w-canbind, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 39edb688-332c-4cd1-8e66-8839e2e4329b, UID in object meta: 
I1109 00:54:15.795886  112068 pv_controller_base.go:216] volume "pv-w-canbind" deleted
I1109 00:54:15.795928  112068 pv_controller_base.go:403] deletion of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-canbind" was already processed
I1109 00:54:15.803909  112068 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (7.707586ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.804267  112068 volume_binding_test.go:191] Running test wait pv prebound
I1109 00:54:15.806277  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.751904ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.807885  112068 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.242451ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.810094  112068 httplog.go:90] POST /api/v1/persistentvolumes: (1.781919ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.810506  112068 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-prebound", version 39680
I1109 00:54:15.810547  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-prebound]: phase: Pending, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound (uid: )", boundByController: false
I1109 00:54:15.810555  112068 pv_controller.go:504] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound
I1109 00:54:15.810564  112068 pv_controller.go:775] updating PersistentVolume[pv-w-prebound]: set phase Available
I1109 00:54:15.812461  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (1.689529ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:15.812703  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 39681
I1109 00:54:15.812760  112068 pv_controller.go:796] volume "pv-w-prebound" entered phase "Available"
I1109 00:54:15.812817  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 39681
I1109 00:54:15.812859  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound (uid: )", boundByController: false
I1109 00:54:15.812873  112068 pv_controller.go:504] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound
I1109 00:54:15.812880  112068 pv_controller.go:775] updating PersistentVolume[pv-w-prebound]: set phase Available
I1109 00:54:15.812889  112068 pv_controller.go:778] updating PersistentVolume[pv-w-prebound]: phase Available already set
I1109 00:54:15.813760  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (3.099417ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.814071  112068 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound", version 39682
I1109 00:54:15.814097  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:54:15.814130  112068 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound (uid: )", boundByController: false
I1109 00:54:15.814146  112068 pv_controller.go:929] binding volume "pv-w-prebound" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound"
I1109 00:54:15.814165  112068 pv_controller.go:827] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound"
I1109 00:54:15.814185  112068 pv_controller.go:847] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I1109 00:54:15.816888  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (2.285185ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:15.817311  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 39683
I1109 00:54:15.817339  112068 pv_controller.go:860] updating PersistentVolume[pv-w-prebound]: bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound"
I1109 00:54:15.817354  112068 pv_controller.go:775] updating PersistentVolume[pv-w-prebound]: set phase Bound
I1109 00:54:15.817373  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 39683
I1109 00:54:15.817414  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound (uid: db9f1c97-cb27-4d61-afb8-41e3c9a9d154)", boundByController: false
I1109 00:54:15.817429  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound
I1109 00:54:15.817447  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:54:15.817463  112068 pv_controller.go:604] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1109 00:54:15.817943  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (3.285832ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.818294  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pv-prebound
I1109 00:54:15.818309  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pv-prebound
I1109 00:54:15.818581  112068 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pv-prebound", PVC "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound" on node "node-2"
I1109 00:54:15.818601  112068 scheduler_binder.go:725] storage class "wait-hhrw" of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound" does not support dynamic provisioning
I1109 00:54:15.818896  112068 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pv-prebound" on node "node-1"
I1109 00:54:15.818983  112068 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pv-prebound", node "node-1"
I1109 00:54:15.819044  112068 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pv-prebound", node "node-1"
I1109 00:54:15.819054  112068 scheduler_binder.go:404] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I1109 00:54:15.819845  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (2.189623ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:15.820345  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 39685
I1109 00:54:15.820393  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound (uid: db9f1c97-cb27-4d61-afb8-41e3c9a9d154)", boundByController: false
I1109 00:54:15.820406  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound
I1109 00:54:15.820434  112068 pv_controller.go:553] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 00:54:15.820451  112068 pv_controller.go:604] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1109 00:54:15.820629  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 39685
I1109 00:54:15.820654  112068 pv_controller.go:796] volume "pv-w-prebound" entered phase "Bound"
I1109 00:54:15.820666  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I1109 00:54:15.820683  112068 pv_controller.go:899] volume "pv-w-prebound" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound"
I1109 00:54:15.820881  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (1.454801ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.821030  112068 scheduler_binder.go:407] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 00:54:15.821048  112068 scheduler_assume_cache.go:337] Restored v1.PersistentVolume "pv-w-prebound"
I1109 00:54:15.821071  112068 scheduler.go:519] Failed to bind volumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
E1109 00:54:15.821088  112068 factory.go:648] Error scheduling volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pv-prebound: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again; retrying
I1109 00:54:15.821112  112068 scheduler.go:774] Updating pod condition for volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pv-prebound to (PodScheduled==False, Reason=VolumeBindingFailed)
I1109 00:54:15.823322  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pv-prebound: (1.299093ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53514]
I1109 00:54:15.824954  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (2.392173ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I1109 00:54:15.827299  112068 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pv-prebound
I1109 00:54:15.827325  112068 scheduler.go:611] Attempting to schedule pod: volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pv-prebound
I1109 00:54:15.827494  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-pv-prebound: (6.541944ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49572]
I1109 00:54:15.827495  112068 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pv-prebound" match with Node "node-1"
I1109 00:54:15.827508  112068 scheduler_binder.go:653] PersistentVolume "pv-w-prebound", Node "node-2" mismatch for Pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pv-prebound": No matching NodeSelectorTerms
I1109 00:54:15.827806  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound" with version 39688
I1109 00:54:15.827835  112068 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound]: bound to "pv-w-prebound"
I1109 00:54:15.827847  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound] status: set phase Bound
I1109 00:54:15.827941  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pv-prebound/status: (6.059588ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.828064  112068 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pv-prebound", node "node-1"
I1109 00:54:15.828088  112068 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pv-prebound", node "node-1": all PVCs bound and nothing to do
I1109 00:54:15.828188  112068 factory.go:698] Attempting to bind pod-w-pv-prebound to node-1
I1109 00:54:15.830524  112068 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-pv-prebound/status: (2.451313ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I1109 00:54:15.830760  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound" with version 39689
I1109 00:54:15.830791  112068 pv_controller.go:740] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound" entered phase "Bound"
I1109 00:54:15.830808  112068 pv_controller.go:955] volume "pv-w-prebound" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound"
I1109 00:54:15.830827  112068 pv_controller.go:956] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound (uid: db9f1c97-cb27-4d61-afb8-41e3c9a9d154)", boundByController: false
I1109 00:54:15.830838  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I1109 00:54:15.830862  112068 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound" with version 39689
I1109 00:54:15.830870  112068 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound]: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I1109 00:54:15.830882  112068 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound (uid: db9f1c97-cb27-4d61-afb8-41e3c9a9d154)", boundByController: false
I1109 00:54:15.830888  112068 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound]: claim is already correctly bound
I1109 00:54:15.830895  112068 pv_controller.go:929] binding volume "pv-w-prebound" to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound"
I1109 00:54:15.830902  112068 pv_controller.go:827] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound"
I1109 00:54:15.830915  112068 pv_controller.go:839] updating PersistentVolume[pv-w-prebound]: already bound to "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound"
I1109 00:54:15.830928  112068 pv_controller.go:775] updating PersistentVolume[pv-w-prebound]: set phase Bound
I1109 00:54:15.830934  112068 pv_controller.go:778] updating PersistentVolume[pv-w-prebound]: phase Bound already set
I1109 00:54:15.830941  112068 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I1109 00:54:15.830972  112068 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound]: already bound to "pv-w-prebound"
I1109 00:54:15.830980  112068 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound] status: set phase Bound
I1109 00:54:15.830998  112068 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound] status: phase Bound already set
I1109 00:54:15.831006  112068 pv_controller.go:955] volume "pv-w-prebound" bound to claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound"
I1109 00:54:15.831018  112068 pv_controller.go:956] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound (uid: db9f1c97-cb27-4d61-afb8-41e3c9a9d154)", boundByController: false
I1109 00:54:15.831027  112068 pv_controller.go:957] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I1109 00:54:15.832082  112068 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pv-prebound/binding: (3.644982ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52714]
I1109 00:54:15.832630  112068 scheduler.go:756] pod volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pod-w-pv-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible.
I1109 00:54:15.834575  112068 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/events: (1.63546ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I1109 00:54:15.921654  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods/pod-w-pv-prebound: (2.805831ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I1109 00:54:15.923836  112068 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims/pvc-w-pv-prebound: (1.555523ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I1109 00:54:15.926063  112068 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-prebound: (1.565306ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I1109 00:54:15.934960  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (7.359184ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I1109 00:54:15.939594  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (4.119479ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I1109 00:54:15.939614  112068 pv_controller_base.go:265] claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound" deleted
I1109 00:54:15.939647  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 39685
I1109 00:54:15.939679  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound (uid: db9f1c97-cb27-4d61-afb8-41e3c9a9d154)", boundByController: false
I1109 00:54:15.939690  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound
I1109 00:54:15.939710  112068 pv_controller.go:545] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound not found
I1109 00:54:15.939724  112068 pv_controller.go:573] volume "pv-w-prebound" is released and reclaim policy "Retain" will be executed
I1109 00:54:15.939732  112068 pv_controller.go:775] updating PersistentVolume[pv-w-prebound]: set phase Released
I1109 00:54:15.942997  112068 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (2.958108ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53514]
I1109 00:54:15.943432  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 39695
I1109 00:54:15.943457  112068 pv_controller.go:796] volume "pv-w-prebound" entered phase "Released"
I1109 00:54:15.943466  112068 pv_controller.go:1009] reclaimVolume[pv-w-prebound]: policy is Retain, nothing to do
I1109 00:54:15.943485  112068 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 39695
I1109 00:54:15.943501  112068 pv_controller.go:487] synchronizing PersistentVolume[pv-w-prebound]: phase: Released, bound to: "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound (uid: db9f1c97-cb27-4d61-afb8-41e3c9a9d154)", boundByController: false
I1109 00:54:15.943509  112068 pv_controller.go:512] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound
I1109 00:54:15.943524  112068 pv_controller.go:545] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound not found
I1109 00:54:15.943530  112068 pv_controller.go:1009] reclaimVolume[pv-w-prebound]: policy is Retain, nothing to do
I1109 00:54:15.944629  112068 httplog.go:90] DELETE /api/v1/persistentvolumes: (4.350858ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I1109 00:54:15.945791  112068 pv_controller_base.go:216] volume "pv-w-prebound" deleted
I1109 00:54:15.945835  112068 pv_controller_base.go:403] deletion of claim "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pvc-w-pv-prebound" was already processed
I1109 00:54:15.952109  112068 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (7.059171ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I1109 00:54:15.952540  112068 volume_binding_test.go:920] test cluster "volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2" start to tear down
I1109 00:54:15.954405  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/pods: (1.613886ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I1109 00:54:15.955829  112068 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-eca484e0-c144-4f6c-9e37-ca9f00e396c2/persistentvolumeclaims: (1.108307ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I1109 00:54:15.957436  112068 httplog.go:90] DELETE /api/v1/persistentvolumes: (1.192956ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I1109 00:54:15.958771  112068 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (990.24µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I1109 00:54:15.959587  112068 pv_controller_base.go:305] Shutting down persistent volume controller
I1109 00:54:15.959670  112068 pv_controller_base.go:416] claim worker queue shutting down
I1109 00:54:15.959943  112068 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=31108&timeout=7m45s&timeoutSeconds=465&watch=true: (1m3.342339384s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35420]
I1109 00:54:15.960070  112068 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=31104&timeout=8m42s&timeoutSeconds=522&watch=true: (1m3.343331858s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35414]
I1109 00:54:15.960250  112068 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=31104&timeout=7m29s&timeoutSeconds=449&watch=true: (1m3.425876213s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35400]
I1109 00:54:15.960577  112068 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=31109&timeout=9m45s&timeoutSeconds=585&watch=true: (1m3.344321255s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35416]
I1109 00:54:15.960668  112068 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=31109&timeout=7m28s&timeoutSeconds=448&watch=true: (1m3.44093562s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35392]
I1109 00:54:15.960396  112068 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=31104&timeout=9m32s&timeoutSeconds=572&watch=true: (1m3.424676613s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35394]
I1109 00:54:15.960472  112068 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=31104&timeout=7m24s&timeoutSeconds=444&watch=true: (1m3.346765371s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35384]
I1109 00:54:15.961031  112068 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=31109&timeout=8m33s&timeoutSeconds=513&watch=true: (1m3.437541527s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35386]
I1109 00:54:15.961073  112068 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=31109&timeout=9m45s&timeoutSeconds=585&watch=true: (1m3.416505944s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35404]
I1109 00:54:15.960773  112068 pv_controller_base.go:359] volume worker queue shutting down
I1109 00:54:15.960835  112068 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=31109&timeout=7m2s&timeoutSeconds=422&watch=true: (1m3.4247777s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34868]
I1109 00:54:15.960993  112068 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=31109&timeout=7m15s&timeoutSeconds=435&watch=true: (1m3.420835202s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35406]
I1109 00:54:15.961179  112068 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=31109&timeout=7m40s&timeoutSeconds=460&watch=true: (1m3.405005161s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35410]
I1109 00:54:15.961270  112068 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=31110&timeout=8m45s&timeoutSeconds=525&watch=true: (1m3.411509107s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35408]
I1109 00:54:15.961352  112068 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=31108&timeout=6m26s&timeoutSeconds=386&watch=true: (1m3.426175105s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35402]
I1109 00:54:15.961383  112068 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=31109&timeout=7m32s&timeoutSeconds=452&watch=true: (1m3.344714461s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35418]
I1109 00:54:15.961362  112068 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=31109&timeout=9m52s&timeoutSeconds=592&watch=true: (1m3.424020728s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35388]
I1109 00:54:15.966803  112068 httplog.go:90] DELETE /api/v1/nodes: (7.582998ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I1109 00:54:15.967166  112068 controller.go:180] Shutting down kubernetes service endpoint reconciler
I1109 00:54:15.969026  112068 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.427553ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I1109 00:54:15.971638  112068 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.95922ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53516]
I1109 00:54:15.971962  112068 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I1109 00:54:15.972166  112068 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&resourceVersion=31104&timeout=5m47s&timeoutSeconds=347&watch=true: (1m6.777404604s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34862]
--- FAIL: TestVolumeBinding (67.03s)
    volume_binding_test.go:243: Failed to schedule Pod "pod-w-pvc-prebound": timed out waiting for the condition
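    The failure message "timed out waiting for the condition" is the generic error produced when a poll-until-timeout helper (the pattern used by Kubernetes' `wait` utilities) gives up before its condition function ever returns true — here, the condition being "pod-w-pvc-prebound is scheduled". A minimal, illustrative Python sketch of that pattern (names and parameters are hypothetical, not the actual test code):

```python
import time

class ConditionTimeout(Exception):
    """Raised when the condition never becomes true within the timeout."""

def poll(interval, timeout, condition):
    """Call condition() every `interval` seconds until it returns True,
    or raise ConditionTimeout after `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(interval)
    raise ConditionTimeout("timed out waiting for the condition")
```

    In the real test, the condition would check the pod's `spec.nodeName` via the API server; a flaky binding (or the scheduler never retrying the pod) leaves the condition false until the deadline, yielding exactly this failure.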

				from junit_99844db6e586a0ff1ded59c41b65ce7fe8e8a77e_20191109-004555.xml



2898 tests passed (output not shown).