Result: FAILURE
Tests: 1 failed / 2894 succeeded
Started: 2019-11-08 02:10
Elapsed: 25m25s
Revision: master
Resultstore: https://source.cloud.google.com/results/invocations/f034a3f5-41e0-4a0a-9e7d-593b8fc50ffd/targets/test

Test Failures


k8s.io/kubernetes/test/integration/volumescheduling TestVolumeBinding 1m5s

go test -v k8s.io/kubernetes/test/integration/volumescheduling -run TestVolumeBinding$
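The command above is how the CI job invokes the failing test; reproducing it locally normally also needs the integration-test environment (a local etcd on PATH) rather than a bare go test. A minimal sketch, assuming a kubernetes checkout at its repo root and the standard make targets (paths and flags below are illustrative, check them against your checkout):

# Install etcd for the integration tests and add it to PATH
# (hack/install-etcd.sh prints the suggested export), then run
# only TestVolumeBinding with verbose output.
hack/install-etcd.sh
export PATH="$(pwd)/third_party/etcd:${PATH}"
make test-integration WHAT=./test/integration/volumescheduling GOFLAGS="-v" KUBE_TEST_ARGS="-run TestVolumeBinding$"

The captured job output for the failing run follows.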
=== RUN   TestVolumeBinding
W1108 02:31:41.531972  111868 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I1108 02:31:41.532030  111868 services.go:51] Setting service IP to "10.0.0.1" (read-write).
I1108 02:31:41.532054  111868 master.go:309] Node port range unspecified. Defaulting to 30000-32767.
I1108 02:31:41.532068  111868 master.go:265] Using reconciler: 
I1108 02:31:41.534808  111868 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.535333  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.535396  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.537568  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.537617  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.541513  111868 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I1108 02:31:41.541607  111868 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.542535  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.542664  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.542917  111868 reflector.go:188] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I1108 02:31:41.545501  111868 watch_cache.go:409] Replace watchCache (rev: 31942) 
I1108 02:31:41.545728  111868 store.go:1342] Monitoring events count at <storage-prefix>//events
I1108 02:31:41.545901  111868 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.546069  111868 reflector.go:188] Listing and watching *core.Event from storage/cacher.go:/events
I1108 02:31:41.547421  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.547503  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.551137  111868 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I1108 02:31:41.551237  111868 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.551487  111868 reflector.go:188] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I1108 02:31:41.553266  111868 watch_cache.go:409] Replace watchCache (rev: 31943) 
I1108 02:31:41.554096  111868 watch_cache.go:409] Replace watchCache (rev: 31943) 
I1108 02:31:41.555074  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.555241  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.557829  111868 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I1108 02:31:41.557970  111868 reflector.go:188] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I1108 02:31:41.558193  111868 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.558651  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.558692  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.559407  111868 watch_cache.go:409] Replace watchCache (rev: 31943) 
I1108 02:31:41.561549  111868 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I1108 02:31:41.561716  111868 reflector.go:188] Listing and watching *core.Secret from storage/cacher.go:/secrets
I1108 02:31:41.561951  111868 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.564071  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.564212  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.563371  111868 watch_cache.go:409] Replace watchCache (rev: 31944) 
I1108 02:31:41.565941  111868 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I1108 02:31:41.566595  111868 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.566917  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.567013  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.567215  111868 reflector.go:188] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I1108 02:31:41.569216  111868 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I1108 02:31:41.569363  111868 reflector.go:188] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I1108 02:31:41.570213  111868 watch_cache.go:409] Replace watchCache (rev: 31945) 
I1108 02:31:41.570758  111868 watch_cache.go:409] Replace watchCache (rev: 31945) 
I1108 02:31:41.571188  111868 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.571434  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.571465  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.572761  111868 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I1108 02:31:41.573042  111868 reflector.go:188] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I1108 02:31:41.573291  111868 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.573520  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.573556  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.575484  111868 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I1108 02:31:41.575737  111868 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.576014  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.576044  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.576103  111868 reflector.go:188] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I1108 02:31:41.577076  111868 watch_cache.go:409] Replace watchCache (rev: 31945) 
I1108 02:31:41.577761  111868 watch_cache.go:409] Replace watchCache (rev: 31945) 
I1108 02:31:41.580311  111868 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I1108 02:31:41.580435  111868 reflector.go:188] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I1108 02:31:41.580634  111868 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.580913  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.580955  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.584636  111868 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I1108 02:31:41.585468  111868 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.585766  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.585834  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.586128  111868 watch_cache.go:409] Replace watchCache (rev: 31945) 
I1108 02:31:41.586140  111868 reflector.go:188] Listing and watching *core.Node from storage/cacher.go:/minions
I1108 02:31:41.590244  111868 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I1108 02:31:41.590395  111868 reflector.go:188] Listing and watching *core.Pod from storage/cacher.go:/pods
I1108 02:31:41.590637  111868 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.591277  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.591316  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.593720  111868 watch_cache.go:409] Replace watchCache (rev: 31946) 
I1108 02:31:41.595909  111868 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I1108 02:31:41.595990  111868 watch_cache.go:409] Replace watchCache (rev: 31947) 
I1108 02:31:41.596162  111868 reflector.go:188] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I1108 02:31:41.596450  111868 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.596674  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.596714  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.598146  111868 watch_cache.go:409] Replace watchCache (rev: 31947) 
I1108 02:31:41.598463  111868 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I1108 02:31:41.598537  111868 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.598805  111868 reflector.go:188] Listing and watching *core.Service from storage/cacher.go:/services/specs
I1108 02:31:41.599085  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.599133  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.600521  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.600558  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.602386  111868 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.602787  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.602826  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.604826  111868 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I1108 02:31:41.604991  111868 rest.go:115] the default service ipfamily for this cluster is: IPv4
I1108 02:31:41.604933  111868 reflector.go:188] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I1108 02:31:41.606204  111868 watch_cache.go:409] Replace watchCache (rev: 31948) 
I1108 02:31:41.606624  111868 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.606965  111868 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.607106  111868 watch_cache.go:409] Replace watchCache (rev: 31948) 
I1108 02:31:41.607909  111868 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.608624  111868 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.609888  111868 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.610977  111868 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.612258  111868 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.612656  111868 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.613144  111868 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.613808  111868 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.614511  111868 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.614887  111868 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.616160  111868 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.616526  111868 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.617234  111868 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.617740  111868 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.618478  111868 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.618833  111868 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.619089  111868 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.619345  111868 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.620261  111868 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.620466  111868 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.620688  111868 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.621480  111868 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.621814  111868 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.623048  111868 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.623811  111868 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.624123  111868 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.624395  111868 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.625634  111868 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.625952  111868 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.626648  111868 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.627768  111868 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.628390  111868 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.629596  111868 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.629921  111868 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.630041  111868 master.go:493] Skipping disabled API group "auditregistration.k8s.io".
I1108 02:31:41.630066  111868 master.go:504] Enabling API group "authentication.k8s.io".
I1108 02:31:41.630083  111868 master.go:504] Enabling API group "authorization.k8s.io".
I1108 02:31:41.630369  111868 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.630582  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.630613  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.632054  111868 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1108 02:31:41.632187  111868 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1108 02:31:41.632878  111868 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.633123  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.633155  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.634710  111868 watch_cache.go:409] Replace watchCache (rev: 31948) 
I1108 02:31:41.637003  111868 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1108 02:31:41.637101  111868 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1108 02:31:41.637318  111868 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.638018  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.638065  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.638510  111868 watch_cache.go:409] Replace watchCache (rev: 31948) 
I1108 02:31:41.639911  111868 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1108 02:31:41.640096  111868 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1108 02:31:41.641098  111868 watch_cache.go:409] Replace watchCache (rev: 31949) 
I1108 02:31:41.641264  111868 master.go:504] Enabling API group "autoscaling".
I1108 02:31:41.641613  111868 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.641949  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.642059  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.643305  111868 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I1108 02:31:41.643366  111868 reflector.go:188] Listing and watching *batch.Job from storage/cacher.go:/jobs
I1108 02:31:41.643807  111868 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.644134  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.644224  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.644535  111868 watch_cache.go:409] Replace watchCache (rev: 31949) 
I1108 02:31:41.645172  111868 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I1108 02:31:41.645259  111868 reflector.go:188] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I1108 02:31:41.645336  111868 master.go:504] Enabling API group "batch".
I1108 02:31:41.645625  111868 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.645886  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.646060  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.646364  111868 watch_cache.go:409] Replace watchCache (rev: 31949) 
I1108 02:31:41.647459  111868 reflector.go:188] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I1108 02:31:41.647275  111868 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I1108 02:31:41.647914  111868 master.go:504] Enabling API group "certificates.k8s.io".
I1108 02:31:41.648182  111868 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.648382  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.648405  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.650474  111868 watch_cache.go:409] Replace watchCache (rev: 31949) 
I1108 02:31:41.653181  111868 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1108 02:31:41.653448  111868 reflector.go:188] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1108 02:31:41.653992  111868 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.654223  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.654284  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.656119  111868 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1108 02:31:41.656222  111868 reflector.go:188] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1108 02:31:41.656651  111868 master.go:504] Enabling API group "coordination.k8s.io".
I1108 02:31:41.656693  111868 master.go:493] Skipping disabled API group "discovery.k8s.io".
I1108 02:31:41.656989  111868 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.657042  111868 watch_cache.go:409] Replace watchCache (rev: 31949) 
I1108 02:31:41.657255  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.657307  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.657895  111868 watch_cache.go:409] Replace watchCache (rev: 31949) 
I1108 02:31:41.658491  111868 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1108 02:31:41.658627  111868 reflector.go:188] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1108 02:31:41.658737  111868 master.go:504] Enabling API group "extensions".
I1108 02:31:41.659080  111868 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.659560  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.659722  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.659649  111868 watch_cache.go:409] Replace watchCache (rev: 31949) 
I1108 02:31:41.661778  111868 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I1108 02:31:41.662149  111868 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.662377  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.662462  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.662656  111868 reflector.go:188] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I1108 02:31:41.664712  111868 watch_cache.go:409] Replace watchCache (rev: 31949) 
I1108 02:31:41.665217  111868 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1108 02:31:41.665474  111868 master.go:504] Enabling API group "networking.k8s.io".
I1108 02:31:41.665776  111868 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.666088  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.666231  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.665388  111868 reflector.go:188] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1108 02:31:41.667181  111868 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I1108 02:31:41.667205  111868 master.go:504] Enabling API group "node.k8s.io".
I1108 02:31:41.667397  111868 reflector.go:188] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I1108 02:31:41.667456  111868 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.667618  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.667729  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.668913  111868 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I1108 02:31:41.669044  111868 reflector.go:188] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I1108 02:31:41.669355  111868 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.669569  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.669595  111868 watch_cache.go:409] Replace watchCache (rev: 31949) 
I1108 02:31:41.669607  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.670618  111868 watch_cache.go:409] Replace watchCache (rev: 31949) 
I1108 02:31:41.671050  111868 watch_cache.go:409] Replace watchCache (rev: 31949) 
I1108 02:31:41.671336  111868 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I1108 02:31:41.671357  111868 master.go:504] Enabling API group "policy".
I1108 02:31:41.671414  111868 reflector.go:188] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I1108 02:31:41.671742  111868 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.671939  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.671967  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.673581  111868 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1108 02:31:41.673775  111868 reflector.go:188] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1108 02:31:41.673792  111868 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.674011  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.674039  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.675183  111868 watch_cache.go:409] Replace watchCache (rev: 31949) 
I1108 02:31:41.675738  111868 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1108 02:31:41.676003  111868 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.675829  111868 reflector.go:188] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1108 02:31:41.676430  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.676797  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.677286  111868 watch_cache.go:409] Replace watchCache (rev: 31949) 
I1108 02:31:41.677513  111868 watch_cache.go:409] Replace watchCache (rev: 31949) 
I1108 02:31:41.677796  111868 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1108 02:31:41.677907  111868 reflector.go:188] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1108 02:31:41.678137  111868 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.678295  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.678317  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.679779  111868 watch_cache.go:409] Replace watchCache (rev: 31949) 
I1108 02:31:41.681058  111868 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1108 02:31:41.681136  111868 reflector.go:188] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1108 02:31:41.681650  111868 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.681977  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.682070  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.683207  111868 watch_cache.go:409] Replace watchCache (rev: 31950) 
I1108 02:31:41.683320  111868 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1108 02:31:41.683425  111868 reflector.go:188] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1108 02:31:41.683529  111868 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.683731  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.683769  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.684323  111868 watch_cache.go:409] Replace watchCache (rev: 31950) 
I1108 02:31:41.684476  111868 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1108 02:31:41.684548  111868 reflector.go:188] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1108 02:31:41.684544  111868 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.685370  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.685403  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.686145  111868 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1108 02:31:41.686461  111868 watch_cache.go:409] Replace watchCache (rev: 31950) 
I1108 02:31:41.686488  111868 reflector.go:188] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1108 02:31:41.686494  111868 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.686722  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.686748  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.687209  111868 watch_cache.go:409] Replace watchCache (rev: 31950) 
I1108 02:31:41.688167  111868 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1108 02:31:41.688205  111868 reflector.go:188] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1108 02:31:41.688215  111868 master.go:504] Enabling API group "rbac.authorization.k8s.io".
I1108 02:31:41.688980  111868 watch_cache.go:409] Replace watchCache (rev: 31950) 
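
Each reflector.go "Listing and watching …" line marks the apiserver's internal watch cache doing an initial LIST from etcd and then WATCHing for changes; the "Replace watchCache (rev: …)" lines report that cache being (re)filled at a given resource version. Client-go informers use the same list-then-watch pattern against the apiserver. Below is a minimal client-side sketch of that pattern; the kubeconfig path and the choice of Pods as the watched resource are placeholders, not taken from this run.

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the integration test instead builds its
	// client directly from the in-process apiserver's config.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// LIST once to fill a local cache, then WATCH to keep it current —
	// the same reflector machinery the log lines above refer to.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("observed pod:", obj.(*corev1.Pod).Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	cache.WaitForCacheSync(stop, podInformer.HasSynced) // local cache now mirrors the server
}
```
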
I1108 02:31:41.692096  111868 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.692335  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.692370  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.693666  111868 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1108 02:31:41.693966  111868 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.694144  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.694176  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.694320  111868 reflector.go:188] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1108 02:31:41.695819  111868 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1108 02:31:41.695935  111868 reflector.go:188] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1108 02:31:41.695945  111868 master.go:504] Enabling API group "scheduling.k8s.io".
I1108 02:31:41.697212  111868 master.go:493] Skipping disabled API group "settings.k8s.io".
I1108 02:31:41.697347  111868 watch_cache.go:409] Replace watchCache (rev: 31950) 
I1108 02:31:41.698166  111868 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.698245  111868 watch_cache.go:409] Replace watchCache (rev: 31950) 
I1108 02:31:41.698516  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.698575  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.700833  111868 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1108 02:31:41.700919  111868 reflector.go:188] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1108 02:31:41.701235  111868 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.701674  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.701742  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.702183  111868 watch_cache.go:409] Replace watchCache (rev: 31950) 
I1108 02:31:41.703271  111868 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1108 02:31:41.703328  111868 reflector.go:188] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1108 02:31:41.703415  111868 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.703704  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.703741  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.705037  111868 watch_cache.go:409] Replace watchCache (rev: 31950) 
I1108 02:31:41.705965  111868 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1108 02:31:41.706061  111868 reflector.go:188] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1108 02:31:41.706305  111868 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.706562  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.706889  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.707932  111868 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I1108 02:31:41.708044  111868 reflector.go:188] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I1108 02:31:41.709339  111868 watch_cache.go:409] Replace watchCache (rev: 31950) 
I1108 02:31:41.709686  111868 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.709925  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.709956  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.710154  111868 watch_cache.go:409] Replace watchCache (rev: 31950) 
I1108 02:31:41.711404  111868 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1108 02:31:41.711480  111868 reflector.go:188] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1108 02:31:41.712416  111868 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.713277  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.713880  111868 watch_cache.go:409] Replace watchCache (rev: 31950) 
I1108 02:31:41.713929  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.716305  111868 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1108 02:31:41.716449  111868 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.716641  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.716682  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.716970  111868 reflector.go:188] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1108 02:31:41.718665  111868 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1108 02:31:41.718701  111868 master.go:504] Enabling API group "storage.k8s.io".
I1108 02:31:41.718930  111868 reflector.go:188] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
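
The storage.k8s.io registrations above (storageclasses, volumeattachments, csinodes, csidrivers) are the API surface TestVolumeBinding exercises, and a delayed-binding StorageClass is the central object in those scenarios: PVCs that use it stay Pending until a pod consuming them is scheduled. A hedged sketch of creating one with client-go follows; the class name and provisioner are illustrative, and the Create signature shown (with a context argument) comes from newer client-go releases than the one in this run.

```go
package main

import (
	"context"
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// A delayed-binding class: PVCs using it stay Pending until a pod that
	// needs them is scheduled. Name and provisioner are illustrative.
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	sc := &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "wait-for-consumer"},
		Provisioner:       "kubernetes.io/no-provisioner", // common choice for pre-created local PVs
		VolumeBindingMode: &mode,
	}
	created, err := clientset.StorageV1().StorageClasses().Create(context.TODO(), sc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created StorageClass", created.Name)
}
```
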
I1108 02:31:41.719559  111868 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.719834  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.719917  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.720136  111868 watch_cache.go:409] Replace watchCache (rev: 31950) 
I1108 02:31:41.720159  111868 watch_cache.go:409] Replace watchCache (rev: 31950) 
I1108 02:31:41.723997  111868 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I1108 02:31:41.724300  111868 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.724375  111868 reflector.go:188] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I1108 02:31:41.724490  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.724514  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.727176  111868 watch_cache.go:409] Replace watchCache (rev: 31951) 
I1108 02:31:41.729081  111868 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I1108 02:31:41.729937  111868 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.730198  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.730239  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.730301  111868 reflector.go:188] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I1108 02:31:41.736289  111868 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I1108 02:31:41.736643  111868 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.736918  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.736968  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.737174  111868 reflector.go:188] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I1108 02:31:41.742041  111868 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I1108 02:31:41.742258  111868 watch_cache.go:409] Replace watchCache (rev: 31952) 
I1108 02:31:41.742442  111868 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.742556  111868 watch_cache.go:409] Replace watchCache (rev: 31952) 
I1108 02:31:41.742854  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.742885  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.744340  111868 reflector.go:188] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I1108 02:31:41.744914  111868 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I1108 02:31:41.745124  111868 master.go:504] Enabling API group "apps".
I1108 02:31:41.745383  111868 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.749306  111868 watch_cache.go:409] Replace watchCache (rev: 31952) 
I1108 02:31:41.745048  111868 reflector.go:188] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I1108 02:31:41.753957  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.754043  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.756965  111868 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1108 02:31:41.757062  111868 reflector.go:188] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1108 02:31:41.757078  111868 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.757387  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.757430  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.758089  111868 watch_cache.go:409] Replace watchCache (rev: 31952) 
I1108 02:31:41.762524  111868 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1108 02:31:41.762637  111868 reflector.go:188] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1108 02:31:41.762643  111868 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.762893  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.762936  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.768190  111868 watch_cache.go:409] Replace watchCache (rev: 31953) 
I1108 02:31:41.768279  111868 watch_cache.go:409] Replace watchCache (rev: 31953) 
I1108 02:31:41.769060  111868 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1108 02:31:41.769162  111868 reflector.go:188] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1108 02:31:41.770103  111868 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.770248  111868 watch_cache.go:409] Replace watchCache (rev: 31953) 
I1108 02:31:41.770474  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.770568  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.771615  111868 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1108 02:31:41.771651  111868 master.go:504] Enabling API group "admissionregistration.k8s.io".
I1108 02:31:41.771660  111868 reflector.go:188] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1108 02:31:41.771741  111868 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.772229  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:41.772261  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:41.776148  111868 watch_cache.go:409] Replace watchCache (rev: 31953) 
I1108 02:31:41.790496  111868 store.go:1342] Monitoring events count at <storage-prefix>//events
I1108 02:31:41.790678  111868 reflector.go:188] Listing and watching *core.Event from storage/cacher.go:/events
I1108 02:31:41.791029  111868 master.go:504] Enabling API group "events.k8s.io".
I1108 02:31:41.791969  111868 watch_cache.go:409] Replace watchCache (rev: 31954) 
I1108 02:31:41.793044  111868 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.793645  111868 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.794401  111868 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.794888  111868 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.795338  111868 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.795734  111868 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.796122  111868 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.796413  111868 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.796711  111868 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.797048  111868 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
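
The authentication.k8s.io and authorization.k8s.io resources registered in this block (tokenreviews, subjectaccessreviews and friends) are request-scoped: they are POSTed to the apiserver and answered inline rather than persisted to etcd, which is why no "Monitoring … count" or reflector lines follow them. A sketch of issuing one such check with client-go; the attributes queried are illustrative and the Create signature follows newer client-go releases.

```go
package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Ask the apiserver whether the current credentials may create StorageClasses.
	ssar := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "create",
				Group:    "storage.k8s.io",
				Resource: "storageclasses",
			},
		},
	}
	resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), ssar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", resp.Status.Allowed)
}
```
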
I1108 02:31:41.798476  111868 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.802501  111868 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.804062  111868 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.804821  111868 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.806355  111868 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.806772  111868 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.808664  111868 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.809021  111868 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.810192  111868 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.810759  111868 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1108 02:31:41.810870  111868 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I1108 02:31:41.813423  111868 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.817372  111868 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.818067  111868 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.820419  111868 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.821566  111868 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.822921  111868 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.823624  111868 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.831008  111868 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.832531  111868 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.833076  111868 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.834915  111868 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1108 02:31:41.835019  111868 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I1108 02:31:41.836598  111868 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.837151  111868 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.838023  111868 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.839933  111868 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.843874  111868 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.847461  111868 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.850676  111868 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.852687  111868 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.853584  111868 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.855889  111868 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.856981  111868 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1108 02:31:41.857229  111868 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I1108 02:31:41.859038  111868 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.860284  111868 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1108 02:31:41.860880  111868 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I1108 02:31:41.864542  111868 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.865718  111868 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.870836  111868 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.871498  111868 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.872622  111868 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.873720  111868 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.875007  111868 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.875804  111868 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1108 02:31:41.875900  111868 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I1108 02:31:41.877113  111868 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.878515  111868 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.878988  111868 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.881032  111868 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.881969  111868 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.882339  111868 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.883500  111868 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.883906  111868 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.884249  111868 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.885772  111868 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.886139  111868 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.886507  111868 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1108 02:31:41.886638  111868 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1108 02:31:41.886656  111868 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1108 02:31:41.887970  111868 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.888821  111868 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.889611  111868 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.891008  111868 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1108 02:31:41.891874  111868 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"89d04f6a-c49a-49bf-9f59-18031bc0a51b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1108 02:31:41.896491  111868 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1108 02:31:41.896664  111868 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1108 02:31:41.896686  111868 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I1108 02:31:41.897248  111868 reflector.go:153] Starting reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I1108 02:31:41.897273  111868 reflector.go:188] Listing and watching *v1.ConfigMap from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I1108 02:31:41.898661  111868 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1108 02:31:41.898692  111868 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I1108 02:31:41.898703  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:41.898720  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:41.898732  111868 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:41.898767  111868 httplog.go:90] GET /healthz: (265.353µs) 0 [Go-http-client/1.1 127.0.0.1:33104]
I1108 02:31:41.898825  111868 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0: (872.637µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:41.899899  111868 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.765088ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33106]
I1108 02:31:41.902435  111868 get.go:251] Starting watch for /api/v1/namespaces/kube-system/configmaps, rv=31945 labels= fields= timeout=5m47s
I1108 02:31:41.904744  111868 httplog.go:90] GET /api/v1/services: (1.590482ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:41.917240  111868 httplog.go:90] GET /api/v1/services: (5.934926ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:41.923215  111868 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1108 02:31:41.923273  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:41.923289  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:41.923301  111868 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:41.923331  111868 httplog.go:90] GET /healthz: (292.075µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33112]
I1108 02:31:41.926473  111868 httplog.go:90] GET /api/v1/services: (1.94728ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33112]
I1108 02:31:41.929205  111868 httplog.go:90] GET /api/v1/namespaces/kube-system: (6.105814ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:41.929338  111868 httplog.go:90] GET /api/v1/services: (2.366776ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33112]
I1108 02:31:41.933035  111868 httplog.go:90] POST /api/v1/namespaces: (3.230332ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:41.935482  111868 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.999404ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:41.940007  111868 httplog.go:90] POST /api/v1/namespaces: (3.946943ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:41.942231  111868 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.413552ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:41.944863  111868 httplog.go:90] POST /api/v1/namespaces: (2.111806ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:41.996903  111868 shared_informer.go:227] caches populated
I1108 02:31:41.996941  111868 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
I1108 02:31:42.002692  111868 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1108 02:31:42.002736  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.002755  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.002764  111868 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.002812  111868 httplog.go:90] GET /healthz: (295.918µs) 0 [Go-http-client/1.1 127.0.0.1:33108]
I1108 02:31:42.024511  111868 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1108 02:31:42.024552  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.024576  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.024586  111868 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.024625  111868 httplog.go:90] GET /healthz: (287.832µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:42.104454  111868 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1108 02:31:42.104499  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.104511  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.104520  111868 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.104598  111868 httplog.go:90] GET /healthz: (339.521µs) 0 [Go-http-client/1.1 127.0.0.1:33108]
I1108 02:31:42.124426  111868 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1108 02:31:42.124460  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.124470  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.124476  111868 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.124508  111868 httplog.go:90] GET /healthz: (242.57µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:42.204694  111868 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1108 02:31:42.204731  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.204746  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.204756  111868 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.204792  111868 httplog.go:90] GET /healthz: (310.913µs) 0 [Go-http-client/1.1 127.0.0.1:33108]
I1108 02:31:42.224485  111868 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1108 02:31:42.224517  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.224530  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.224556  111868 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.224598  111868 httplog.go:90] GET /healthz: (291.601µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:42.302811  111868 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1108 02:31:42.302887  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.302903  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.302915  111868 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.302960  111868 httplog.go:90] GET /healthz: (320.015µs) 0 [Go-http-client/1.1 127.0.0.1:33108]
I1108 02:31:42.324546  111868 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1108 02:31:42.324587  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.324620  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.324632  111868 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.324670  111868 httplog.go:90] GET /healthz: (329.069µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:42.402757  111868 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1108 02:31:42.402791  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.402805  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.402815  111868 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.402866  111868 httplog.go:90] GET /healthz: (283.875µs) 0 [Go-http-client/1.1 127.0.0.1:33108]
I1108 02:31:42.424422  111868 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1108 02:31:42.424460  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.424472  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.424480  111868 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.424517  111868 httplog.go:90] GET /healthz: (274.347µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:42.502768  111868 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1108 02:31:42.502802  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.502819  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.502828  111868 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.502874  111868 httplog.go:90] GET /healthz: (290.427µs) 0 [Go-http-client/1.1 127.0.0.1:33108]
I1108 02:31:42.524459  111868 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1108 02:31:42.524501  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.524514  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.524523  111868 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.524571  111868 httplog.go:90] GET /healthz: (292.656µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:42.531311  111868 client.go:361] parsed scheme: "endpoint"
I1108 02:31:42.531449  111868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1108 02:31:42.612492  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.612523  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.612533  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.612606  111868 httplog.go:90] GET /healthz: (8.339515ms) 0 [Go-http-client/1.1 127.0.0.1:33108]
I1108 02:31:42.630123  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.630161  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.630175  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.630241  111868 httplog.go:90] GET /healthz: (5.438894ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:42.704328  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.704359  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.704367  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.704424  111868 httplog.go:90] GET /healthz: (1.31279ms) 0 [Go-http-client/1.1 127.0.0.1:33108]
I1108 02:31:42.725405  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.725436  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.725451  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.725495  111868 httplog.go:90] GET /healthz: (1.169108ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:42.805239  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.805283  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.805300  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.805376  111868 httplog.go:90] GET /healthz: (2.823996ms) 0 [Go-http-client/1.1 127.0.0.1:33108]
I1108 02:31:42.825315  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.825341  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.825353  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.825397  111868 httplog.go:90] GET /healthz: (1.122652ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:42.901516  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.454814ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:42.902356  111868 httplog.go:90] GET /apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical: (5.223594ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.905903  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.90664ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:42.907374  111868 httplog.go:90] POST /apis/scheduling.k8s.io/v1/priorityclasses: (3.871446ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.908465  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.908494  111868 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1108 02:31:42.908504  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.908548  111868 httplog.go:90] GET /healthz: (3.358077ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:42.908698  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.401938ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33108]
I1108 02:31:42.908719  111868 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I1108 02:31:42.910882  111868 httplog.go:90] GET /apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical: (1.921722ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:42.911136  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (2.030126ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.912561  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.086567ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.913941  111868 httplog.go:90] POST /apis/scheduling.k8s.io/v1/priorityclasses: (2.619922ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:42.915167  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (2.158546ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.915363  111868 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I1108 02:31:42.915404  111868 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I1108 02:31:42.917142  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.516897ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.920160  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (2.60674ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.922994  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.853187ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.926697  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (3.204897ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.927026  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:42.927056  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:42.927089  111868 httplog.go:90] GET /healthz: (2.884422ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:42.934489  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (7.180398ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.935133  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1108 02:31:42.940092  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (4.70475ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.943924  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.243859ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.944239  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1108 02:31:42.961745  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (17.096562ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.967448  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.945789ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.967994  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1108 02:31:42.972620  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (4.318187ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.975441  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.221158ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.975688  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I1108 02:31:42.981727  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (5.695933ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.986820  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.126456ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.987363  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I1108 02:31:42.989217  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.111168ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.992811  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.917373ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:42.993325  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I1108 02:31:42.999402  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (5.398507ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.003569  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.103855ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.004034  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I1108 02:31:43.006815  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.007008  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.540496ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.007015  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.007311  111868 httplog.go:90] GET /healthz: (4.868254ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:43.013684  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.945315ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.014222  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1108 02:31:43.017210  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (2.541463ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.022024  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.572945ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.022622  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1108 02:31:43.024090  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.170392ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.025698  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.025725  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.025771  111868 httplog.go:90] GET /healthz: (1.642863ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.028702  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.919573ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.029340  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1108 02:31:43.032484  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (2.517731ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.037973  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.876436ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.039753  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1108 02:31:43.041737  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.502508ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.048668  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.294643ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.049404  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I1108 02:31:43.053570  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (3.771983ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.057649  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.129254ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.058136  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1108 02:31:43.059595  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.145119ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.063458  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.044869ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.064481  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1108 02:31:43.066927  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (2.197554ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.071494  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.906234ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.071785  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1108 02:31:43.076634  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (4.517021ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.080251  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.523153ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.080668  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1108 02:31:43.082834  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.405047ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.087725  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.37882ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.088032  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1108 02:31:43.089818  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.543943ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.094591  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.582629ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.095072  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1108 02:31:43.099471  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (3.821376ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.103296  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.836082ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.103571  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1108 02:31:43.104527  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.104552  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.104602  111868 httplog.go:90] GET /healthz: (1.619536ms) 0 [Go-http-client/1.1 127.0.0.1:33114]
I1108 02:31:43.105014  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.203814ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.107486  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.035016ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.107736  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1108 02:31:43.109527  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.495728ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.112170  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.161189ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.112458  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1108 02:31:43.113820  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.132686ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.117324  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.02358ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.117713  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1108 02:31:43.119166  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.155371ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.121691  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.981137ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.122101  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1108 02:31:43.124425  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.943975ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.125653  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.125674  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.125714  111868 httplog.go:90] GET /healthz: (1.489647ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.127456  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.509357ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.127729  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1108 02:31:43.129265  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.293866ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.133347  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.561309ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.133818  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1108 02:31:43.135277  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.117497ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.137944  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.997ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.138557  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1108 02:31:43.140440  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.47786ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.143398  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.256665ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.143719  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1108 02:31:43.145085  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.07254ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.147712  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.996205ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.147992  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1108 02:31:43.149567  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.28309ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.152006  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.893276ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.152352  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1108 02:31:43.153950  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.278155ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.156781  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.162885ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.157340  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1108 02:31:43.158936  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.356106ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.162142  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.264579ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.162634  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1108 02:31:43.164592  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.526194ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.167950  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.613702ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.168258  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1108 02:31:43.169831  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.260174ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.173174  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.750844ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.173670  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1108 02:31:43.175297  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.368191ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.181534  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.983678ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.181864  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1108 02:31:43.183312  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.219925ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.185890  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.048849ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.186328  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1108 02:31:43.188816  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (2.105583ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.192496  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.97389ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.192769  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1108 02:31:43.196529  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (3.418259ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.200386  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.808668ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.200763  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1108 02:31:43.202524  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.448276ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.203497  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.203524  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.203603  111868 httplog.go:90] GET /healthz: (1.08589ms) 0 [Go-http-client/1.1 127.0.0.1:33114]
I1108 02:31:43.206102  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.474194ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.206557  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1108 02:31:43.208216  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.32173ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.211305  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.454761ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.211764  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1108 02:31:43.214366  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (2.324732ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.217532  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.561048ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.217991  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1108 02:31:43.219567  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.165523ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.222716  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.681516ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.223223  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1108 02:31:43.224968  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.471189ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.225009  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.225302  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.225659  111868 httplog.go:90] GET /healthz: (1.327493ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.228544  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.415173ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.228956  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1108 02:31:43.230623  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.431801ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.233997  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.568446ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.234488  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1108 02:31:43.236235  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.435895ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.239698  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.798545ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.240125  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1108 02:31:43.242012  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.497442ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.245095  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.517267ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.245354  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1108 02:31:43.246796  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.212361ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.249338  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.092282ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.249688  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1108 02:31:43.251132  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.201886ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.254376  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.489815ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.254799  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1108 02:31:43.256345  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.293009ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.259787  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.812202ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.260213  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1108 02:31:43.261720  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.168241ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.264990  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.710842ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.265572  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1108 02:31:43.267217  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.259062ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.270736  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.595641ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.271271  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1108 02:31:43.272940  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.386102ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.275483  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.819453ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.276071  111868 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
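The repeating GET-404-then-POST-201 pairs above are the RBAC bootstrap post-start hook reconciling each default clusterrole: it looks a role up and creates it only when the lookup reports NotFound. A minimal sketch of that ensure-exists flow with client-go follows (package name, function name, and the context-taking method signatures are assumptions based on recent client-go; this is illustrative, not the storage_rbac.go implementation):

// Sketch only: mirrors the GET (404) -> POST (201) pattern visible in the log,
// creating a ClusterRole only when a lookup reports NotFound.
package example

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func ensureClusterRole(ctx context.Context, cs kubernetes.Interface, role *rbacv1.ClusterRole) error {
	// GET first; a 404 from the apiserver surfaces as a NotFound error here.
	_, err := cs.RbacV1().ClusterRoles().Get(ctx, role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // role already exists, nothing to reconcile
	}
	if !apierrors.IsNotFound(err) {
		return err // unexpected error, surface it
	}
	// Not found: POST the role, which the log records as a 201.
	_, err = cs.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{})
	return err
}

The same pattern then repeats below for clusterrolebindings and, later, for namespaced roles and rolebindings in kube-system.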
I1108 02:31:43.278050  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.6724ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.285134  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.565306ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.285636  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1108 02:31:43.287145  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.238847ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.289633  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.093542ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.289899  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1108 02:31:43.291277  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.173229ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.293450  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.740853ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.293647  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1108 02:31:43.295392  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.506461ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.297989  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.187885ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.298489  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I1108 02:31:43.304882  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.305031  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.305115  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (3.627216ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:43.306161  111868 httplog.go:90] GET /healthz: (3.334804ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:43.319992  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.750168ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.320365  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1108 02:31:43.325737  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.325763  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.325812  111868 httplog.go:90] GET /healthz: (1.503098ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.338922  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.685924ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.367664  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.504322ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.368051  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1108 02:31:43.379247  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (2.029841ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.400888  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.554045ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.401210  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1108 02:31:43.405496  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.405542  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.405667  111868 httplog.go:90] GET /healthz: (2.724131ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:43.420329  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (3.151349ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.426759  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.426832  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.426930  111868 httplog.go:90] GET /healthz: (2.707959ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.440102  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.865958ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.440582  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1108 02:31:43.459356  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.98243ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.480378  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.096881ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.480962  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1108 02:31:43.499123  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.718068ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.504018  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.504065  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.504106  111868 httplog.go:90] GET /healthz: (1.470828ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:43.520146  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.842592ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.520418  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1108 02:31:43.527665  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.527891  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.528081  111868 httplog.go:90] GET /healthz: (1.832057ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
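Each verbose healthz block above is the apiserver reporting its post-start hooks: every probe returns a non-200 status with [-]poststarthook/rbac/bootstrap-roles until the reconciliation finishes, so the caller keeps polling GET /healthz and treats anything other than 200 as not-ready. A small sketch of such a polling loop follows (the base URL, interval, and function name are illustrative assumptions, not taken from the test harness):

// Sketch only: polls /healthz until it returns 200, matching the repeated
// "healthz check failed" probes in the log while bootstrap-roles is unfinished.
package example

import (
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // all post-start hooks, including rbac/bootstrap-roles, report ok
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %v", timeout)
}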
I1108 02:31:43.539017  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.844945ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.560961  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.689036ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.561288  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1108 02:31:43.578942  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.465142ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.600129  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.835676ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.600487  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1108 02:31:43.604177  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.604205  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.604268  111868 httplog.go:90] GET /healthz: (1.774736ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:43.619371  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (2.1016ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.625782  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.625815  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.625945  111868 httplog.go:90] GET /healthz: (1.623077ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.641256  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.018687ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.641809  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1108 02:31:43.659006  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.761966ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.680376  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.161755ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.680819  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1108 02:31:43.700194  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (2.873114ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.703750  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.703781  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.703860  111868 httplog.go:90] GET /healthz: (1.426281ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:43.721093  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.81876ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.721377  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1108 02:31:43.728448  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.728506  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.728549  111868 httplog.go:90] GET /healthz: (1.516702ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.739176  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (2.006789ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.760427  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.009434ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.761048  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1108 02:31:43.778727  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.536663ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.800407  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.126346ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.801003  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1108 02:31:43.803555  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.803586  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.803636  111868 httplog.go:90] GET /healthz: (1.153797ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:43.818925  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.661993ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.826916  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.826952  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.827013  111868 httplog.go:90] GET /healthz: (1.438262ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.840386  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.0251ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.840759  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1108 02:31:43.859100  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.766928ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.880212  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.012938ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.880505  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1108 02:31:43.898933  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.686843ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.906162  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.906195  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.906261  111868 httplog.go:90] GET /healthz: (3.741198ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:43.922721  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.402523ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.923081  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1108 02:31:43.925791  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:43.925824  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:43.925888  111868 httplog.go:90] GET /healthz: (1.699088ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.940275  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (3.079561ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.960630  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.433907ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:43.961181  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1108 02:31:43.979642  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (2.259574ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.001811  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.394132ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.002122  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1108 02:31:44.003715  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.003784  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.003922  111868 httplog.go:90] GET /healthz: (1.343197ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:44.019171  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.951164ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.026991  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.027021  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.027075  111868 httplog.go:90] GET /healthz: (2.574465ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.040280  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.006424ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.040587  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1108 02:31:44.058724  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.518408ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.082755  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.606739ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.083049  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1108 02:31:44.098717  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.53834ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.104541  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.104576  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.104626  111868 httplog.go:90] GET /healthz: (2.034382ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:44.120011  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.798114ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.120328  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1108 02:31:44.126261  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.126317  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.126367  111868 httplog.go:90] GET /healthz: (2.015291ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.138603  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.334038ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.165330  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.299763ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.165583  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1108 02:31:44.178882  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.650235ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.200142  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.87524ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.200574  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1108 02:31:44.205493  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.205522  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.205613  111868 httplog.go:90] GET /healthz: (2.829899ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:44.218787  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.600461ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.225363  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.225403  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.225461  111868 httplog.go:90] GET /healthz: (1.1628ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.239971  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.742787ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.240287  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1108 02:31:44.258975  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.668857ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.280115  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.781283ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.281454  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1108 02:31:44.299087  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.78212ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.303501  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.303864  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.304128  111868 httplog.go:90] GET /healthz: (1.644465ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:44.328584  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (11.241624ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.329101  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.329125  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.329177  111868 httplog.go:90] GET /healthz: (4.823019ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:44.329700  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1108 02:31:44.338959  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.721721ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.360186  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.960433ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.360505  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1108 02:31:44.378959  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.703885ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.399949  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.487597ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.400453  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1108 02:31:44.406440  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.406481  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.406536  111868 httplog.go:90] GET /healthz: (4.038156ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:44.418862  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.631736ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.425670  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.425714  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.425760  111868 httplog.go:90] GET /healthz: (1.389035ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.440336  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.997199ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.440792  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1108 02:31:44.458941  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.635898ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.479522  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.254848ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.479968  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1108 02:31:44.498889  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.632199ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.504014  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.504085  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.504127  111868 httplog.go:90] GET /healthz: (1.608698ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:44.519886  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.66141ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.520234  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1108 02:31:44.525328  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.525361  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.525407  111868 httplog.go:90] GET /healthz: (1.201231ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.539998  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (2.716375ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.559686  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.521503ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.560271  111868 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1108 02:31:44.578815  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.669365ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.581070  111868 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.680004ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.603753  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.603783  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.603877  111868 httplog.go:90] GET /healthz: (1.383498ms) 0 [Go-http-client/1.1 127.0.0.1:33114]
I1108 02:31:44.604825  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (7.537502ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.605406  111868 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1108 02:31:44.619718  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (2.216211ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.623335  111868 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.048327ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.632740  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.632795  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.632901  111868 httplog.go:90] GET /healthz: (8.657748ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.641829  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.506708ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.642357  111868 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1108 02:31:44.661130  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (3.908389ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.665024  111868 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.950142ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.680063  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.820067ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.680375  111868 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1108 02:31:44.699560  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (2.371996ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.702428  111868 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.080024ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.704008  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.704031  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.704075  111868 httplog.go:90] GET /healthz: (1.258212ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:44.721646  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.693728ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.722031  111868 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1108 02:31:44.726691  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.726744  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.726806  111868 httplog.go:90] GET /healthz: (2.533164ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.738734  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.533244ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.741283  111868 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.936257ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.760678  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.356827ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.761278  111868 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1108 02:31:44.778944  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.62966ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.781656  111868 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.823222ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.800779  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.416765ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.801777  111868 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1108 02:31:44.803641  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.803681  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.803740  111868 httplog.go:90] GET /healthz: (1.239991ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:44.819324  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.056979ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.821638  111868 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.648634ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.825582  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.825623  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.825672  111868 httplog.go:90] GET /healthz: (1.509169ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.841228  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (3.717004ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.841676  111868 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1108 02:31:44.859310  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (2.027297ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.862106  111868 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.83997ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.881265  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.963491ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.881619  111868 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I1108 02:31:44.898873  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.594911ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.901211  111868 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.712094ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.903688  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.903720  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.903752  111868 httplog.go:90] GET /healthz: (1.345695ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:44.920129  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.939256ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.920478  111868 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1108 02:31:44.925451  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:44.925486  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:44.925552  111868 httplog.go:90] GET /healthz: (1.280898ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.938735  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.577093ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.941326  111868 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.988131ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.961398  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (4.086157ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.961762  111868 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1108 02:31:44.979134  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.870003ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.983226  111868 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.586427ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:44.999644  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.441288ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.000490  111868 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1108 02:31:45.003655  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:45.003687  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:45.003755  111868 httplog.go:90] GET /healthz: (1.172022ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:45.018977  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.694595ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.021371  111868 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.908536ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.025534  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:45.025574  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:45.025617  111868 httplog.go:90] GET /healthz: (1.371211ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.040197  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.02434ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.040595  111868 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1108 02:31:45.059068  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.734204ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.061665  111868 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.04426ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.079807  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.517379ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.080141  111868 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1108 02:31:45.098762  111868 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.498886ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.100788  111868 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.522817ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.103372  111868 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1108 02:31:45.103490  111868 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1108 02:31:45.103563  111868 httplog.go:90] GET /healthz: (1.165467ms) 0 [Go-http-client/1.1 127.0.0.1:33474]
I1108 02:31:45.120089  111868 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.778051ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.120438  111868 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1108 02:31:45.126762  111868 httplog.go:90] GET /healthz: (2.440335ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.129327  111868 httplog.go:90] GET /api/v1/namespaces/default: (1.978864ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.132249  111868 httplog.go:90] POST /api/v1/namespaces: (2.280059ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.134299  111868 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.452441ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.141734  111868 httplog.go:90] POST /api/v1/namespaces/default/services: (6.893001ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.143650  111868 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.251939ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.148472  111868 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (4.296635ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.204979  111868 httplog.go:90] GET /healthz: (2.409253ms) 200 [Go-http-client/1.1 127.0.0.1:33474]
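
At this point the bootstrap controller has created the default namespace, the default/kubernetes Service, and its Endpoints (the three 404-then-POST pairs above), and /healthz now returns 200. A hedged client-go sketch of checking those bootstrap objects is below; the kubeconfig path is an assumption, and the context-taking Get signature assumes a recent client-go, not necessarily the vendored version used by this test.

// Illustrative only: verify the bootstrap objects with client-go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The bootstrap controller should have created default/kubernetes and its Endpoints.
	svc, err := client.CoreV1().Services("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ep, err := client.CoreV1().Endpoints("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("service %s has cluster IP %s and %d endpoint subsets\n",
		svc.Name, svc.Spec.ClusterIP, len(ep.Subsets))
}
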
W1108 02:31:45.206036  111868 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1108 02:31:45.206068  111868 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1108 02:31:45.206095  111868 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1108 02:31:45.206207  111868 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1108 02:31:45.206220  111868 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1108 02:31:45.206234  111868 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1108 02:31:45.206252  111868 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1108 02:31:45.206283  111868 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1108 02:31:45.206296  111868 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1108 02:31:45.206308  111868 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1108 02:31:45.206316  111868 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1108 02:31:45.206338  111868 factory.go:300] Creating scheduler from algorithm provider 'DefaultProvider'
I1108 02:31:45.206350  111868 factory.go:392] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I1108 02:31:45.207573  111868 reflector.go:153] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.207603  111868 reflector.go:188] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.207652  111868 reflector.go:153] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.207669  111868 reflector.go:188] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.208262  111868 reflector.go:153] Starting reflector *v1beta1.CSINode (0s) from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.208281  111868 reflector.go:188] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.209198  111868 reflector.go:153] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.209214  111868 reflector.go:188] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.209521  111868 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (1.502493ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.209602  111868 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (1.223617ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:45.208154  111868 reflector.go:153] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.209624  111868 reflector.go:153] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.209681  111868 reflector.go:188] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.209774  111868 reflector.go:153] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.209792  111868 reflector.go:188] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.209646  111868 reflector.go:153] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.209955  111868 reflector.go:188] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.209672  111868 reflector.go:188] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.210345  111868 reflector.go:153] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.210358  111868 reflector.go:188] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.210753  111868 reflector.go:153] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.210766  111868 reflector.go:188] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.211055  111868 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (616.754µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:45.211187  111868 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (909.55µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I1108 02:31:45.211218  111868 reflector.go:153] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.211230  111868 reflector.go:188] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.211235  111868 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (985.233µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33474]
I1108 02:31:45.212087  111868 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (1.251786ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33794]
I1108 02:31:45.213216  111868 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (1.006556ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33796]
I1108 02:31:45.214263  111868 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=31950 labels= fields= timeout=7m29s
I1108 02:31:45.214340  111868 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (1.051302ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33114]
I1108 02:31:45.214545  111868 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=31950 labels= fields= timeout=9m43s
I1108 02:31:45.215392  111868 get.go:251] Starting watch for /api/v1/services, rv=32317 labels= fields= timeout=7m15s
I1108 02:31:45.216084  111868 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (2.73268ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33800]
I1108 02:31:45.216294  111868 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (2.764315ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I1108 02:31:45.216498  111868 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (2.976622ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I1108 02:31:45.219211  111868 get.go:251] Starting watch for /api/v1/nodes, rv=31946 labels= fields= timeout=9m52s
I1108 02:31:45.222434  111868 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=31945 labels= fields= timeout=5m13s
I1108 02:31:45.223448  111868 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=31949 labels= fields= timeout=6m34s
I1108 02:31:45.231872  111868 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=31945 labels= fields= timeout=9m45s
I1108 02:31:45.235591  111868 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=31952 labels= fields= timeout=8m45s
I1108 02:31:45.237789  111868 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=31952 labels= fields= timeout=7m40s
I1108 02:31:45.240266  111868 get.go:251] Starting watch for /api/v1/pods, rv=31947 labels= fields= timeout=7m17s
I1108 02:31:45.240371  111868 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=31948 labels= fields= timeout=9m19s
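
Each "Starting reflector ... / Listing and watching ..." pair above corresponds to a shared informer doing an initial LIST (the GETs with ?limit=500&resourceVersion=0) and then opening a WATCH from the returned resourceVersion (the "Starting watch for ... rv=..." lines). A minimal client-go sketch of that pattern, with the same 0s resync period shown in the reflector lines, follows; it is a simplified stand-in, not the test's own wiring, and the kubeconfig path is assumed.

// Minimal sketch of the list-then-watch pattern seen above.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	stop := make(chan struct{})
	defer close(stop)

	// 0s resync, matching the "(0s)" in the reflector log lines.
	factory := informers.NewSharedInformerFactory(clientset, 0*time.Second)
	pvInformer := factory.Core().V1().PersistentVolumes().Informer()
	pvcInformer := factory.Core().V1().PersistentVolumeClaims().Informer()

	factory.Start(stop) // each informer issues LIST ...?resourceVersion=0 then WATCH
	if !cache.WaitForCacheSync(stop, pvInformer.HasSynced, pvcInformer.HasSynced) {
		panic("caches did not sync")
	}
	fmt.Println("caches populated") // mirrors the shared_informer.go lines above
}
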
I1108 02:31:45.307687  111868 shared_informer.go:227] caches populated
I1108 02:31:45.307744  111868 shared_informer.go:227] caches populated
I1108 02:31:45.307749  111868 shared_informer.go:227] caches populated
I1108 02:31:45.307754  111868 shared_informer.go:227] caches populated
I1108 02:31:45.307758  111868 shared_informer.go:227] caches populated
I1108 02:31:45.307763  111868 shared_informer.go:227] caches populated
I1108 02:31:45.307767  111868 shared_informer.go:227] caches populated
I1108 02:31:45.307771  111868 shared_informer.go:227] caches populated
I1108 02:31:45.307775  111868 shared_informer.go:227] caches populated
I1108 02:31:45.307783  111868 shared_informer.go:227] caches populated
I1108 02:31:45.307787  111868 shared_informer.go:227] caches populated
I1108 02:31:45.308053  111868 plugins.go:631] Loaded volume plugin "kubernetes.io/mock-provisioner"
W1108 02:31:45.308086  111868 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1108 02:31:45.308108  111868 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1108 02:31:45.308128  111868 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1108 02:31:45.308137  111868 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1108 02:31:45.308146  111868 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1108 02:31:45.308229  111868 shared_informer.go:227] caches populated
I1108 02:31:45.308262  111868 pv_controller_base.go:289] Starting persistent volume controller
I1108 02:31:45.308267  111868 shared_informer.go:197] Waiting for caches to sync for persistent volume
I1108 02:31:45.308557  111868 reflector.go:153] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.308581  111868 reflector.go:188] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.309017  111868 reflector.go:153] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.309036  111868 reflector.go:188] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.309103  111868 reflector.go:153] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.309114  111868 reflector.go:188] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.309114  111868 reflector.go:153] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.309129  111868 reflector.go:188] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.309779  111868 reflector.go:153] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.309794  111868 reflector.go:188] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:135
I1108 02:31:45.310210  111868 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (551.717µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33828]
I1108 02:31:45.310210  111868 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (544.388µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33830]
I1108 02:31:45.310222  111868 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (515.935µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I1108 02:31:45.311222  111868 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (376.369µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1108 02:31:45.311417  111868 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (546.465µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33830]
I1108 02:31:45.311431  111868 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=31950 labels= fields= timeout=8m42s
I1108 02:31:45.311707  111868 get.go:251] Starting watch for /api/v1/nodes, rv=31946 labels= fields= timeout=7m27s
I1108 02:31:45.311997  111868 get.go:251] Starting watch for /api/v1/pods, rv=31947 labels= fields= timeout=5m19s
I1108 02:31:45.312260  111868 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=31945 labels= fields= timeout=6m18s
I1108 02:31:45.312568  111868 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=31945 labels= fields= timeout=9m37s
I1108 02:31:45.408378  111868 shared_informer.go:227] caches populated
I1108 02:31:45.408433  111868 shared_informer.go:204] Caches are synced for persistent volume 
I1108 02:31:45.408553  111868 pv_controller_base.go:160] controller initialized
I1108 02:31:45.408385  111868 shared_informer.go:227] caches populated
I1108 02:31:45.408726  111868 shared_informer.go:227] caches populated
I1108 02:31:45.408797  111868 shared_informer.go:227] caches populated
I1108 02:31:45.408889  111868 shared_informer.go:227] caches populated
I1108 02:31:45.408950  111868 shared_informer.go:227] caches populated
I1108 02:31:45.408732  111868 pv_controller_base.go:426] resyncing PV controller
I1108 02:31:45.413945  111868 httplog.go:90] POST /api/v1/nodes: (3.995448ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33852]
I1108 02:31:45.415163  111868 node_tree.go:86] Added node "node-1" in group "" to NodeTree
I1108 02:31:45.418476  111868 httplog.go:90] POST /api/v1/nodes: (3.416186ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33852]
I1108 02:31:45.419857  111868 node_tree.go:86] Added node "node-2" in group "" to NodeTree
I1108 02:31:45.424144  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.713137ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33852]
I1108 02:31:45.428118  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.341848ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33852]
I1108 02:31:45.428533  111868 volume_binding_test.go:191] Running test mix immediate and wait
I1108 02:31:45.455049  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (24.04982ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33852]
I1108 02:31:45.460050  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.307206ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33852]
I1108 02:31:45.467788  111868 httplog.go:90] POST /api/v1/persistentvolumes: (5.409423ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33852]
I1108 02:31:45.468551  111868 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-4", version 32337
I1108 02:31:45.468657  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Pending, bound to: "", boundByController: false
I1108 02:31:45.468720  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-4]: volume is unused
I1108 02:31:45.468732  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-4]: set phase Available
I1108 02:31:45.472462  111868 httplog.go:90] POST /api/v1/persistentvolumes: (3.888732ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33852]
I1108 02:31:45.473537  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (4.030474ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:45.473880  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 32339
I1108 02:31:45.473985  111868 pv_controller.go:796] volume "pv-w-canbind-4" entered phase "Available"
I1108 02:31:45.474038  111868 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-canbind-2", version 32338
I1108 02:31:45.474089  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Pending, bound to: "", boundByController: false
I1108 02:31:45.474156  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-i-canbind-2]: volume is unused
I1108 02:31:45.474195  111868 pv_controller.go:775] updating PersistentVolume[pv-i-canbind-2]: set phase Available
I1108 02:31:45.476731  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (2.226322ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:45.477102  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 32340
I1108 02:31:45.477138  111868 pv_controller.go:796] volume "pv-i-canbind-2" entered phase "Available"
I1108 02:31:45.477172  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 32339
I1108 02:31:45.477192  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Available, bound to: "", boundByController: false
I1108 02:31:45.477211  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-4]: volume is unused
I1108 02:31:45.477218  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-4]: set phase Available
I1108 02:31:45.477225  111868 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-4]: phase Available already set
I1108 02:31:45.477742  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 32340
I1108 02:31:45.477774  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Available, bound to: "", boundByController: false
I1108 02:31:45.477806  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-i-canbind-2]: volume is unused
I1108 02:31:45.477815  111868 pv_controller.go:775] updating PersistentVolume[pv-i-canbind-2]: set phase Available
I1108 02:31:45.477825  111868 pv_controller.go:778] updating PersistentVolume[pv-i-canbind-2]: phase Available already set
I1108 02:31:45.478114  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (4.376073ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33852]
I1108 02:31:45.478407  111868 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4", version 32341
I1108 02:31:45.478505  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:31:45.478544  111868 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4]: no volume found
I1108 02:31:45.478584  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4] status: set phase Pending
I1108 02:31:45.478603  111868 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4] status: phase Pending already set
I1108 02:31:45.478966  111868 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7", Name:"pvc-w-canbind-4", UID:"09f29b18-28b9-44af-9d65-acd58826b506", APIVersion:"v1", ResourceVersion:"32341", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
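
The 'WaitForFirstConsumer' event above means pvc-w-canbind-4's StorageClass uses volumeBindingMode WaitForFirstConsumer, so the PV controller leaves the claim Pending until a pod that uses it is scheduled; only then does the scheduler pick a node-compatible PV. A hedged sketch of what such a class looks like follows; the name and provisioner are illustrative assumptions, not the test's generated names (e.g. "wait-h8ht").

// Illustrative only: a StorageClass shaped like the test's "wait" classes.
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	sc := storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "wait-example"},       // assumed name
		Provisioner:       "kubernetes.io/no-provisioner",                // assumed provisioner
		VolumeBindingMode: &mode,
	}
	// Claims using this class stay Pending (reason WaitForFirstConsumer) until a pod
	// referencing them is scheduled.
	fmt.Printf("%s: provisioner=%s mode=%s\n", sc.Name, sc.Provisioner, *sc.VolumeBindingMode)
}
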
I1108 02:31:45.480431  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (1.861729ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33852]
I1108 02:31:45.480664  111868 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2", version 32342
I1108 02:31:45.480693  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:31:45.480724  111868 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2]: volume "pv-i-canbind-2" found: phase: Available, bound to: "", boundByController: false
I1108 02:31:45.480734  111868 pv_controller.go:929] binding volume "pv-i-canbind-2" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2"
I1108 02:31:45.480746  111868 pv_controller.go:827] updating PersistentVolume[pv-i-canbind-2]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2"
I1108 02:31:45.480773  111868 pv_controller.go:847] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2" bound to volume "pv-i-canbind-2"
I1108 02:31:45.485490  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2: (3.692433ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I1108 02:31:45.485985  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (7.011627ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:45.486076  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 32345
I1108 02:31:45.486118  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Available, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2 (uid: 32a5722d-6eb0-436d-a595-28d6eb1d6284)", boundByController: true
I1108 02:31:45.486141  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2
I1108 02:31:45.486191  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:31:45.486212  111868 pv_controller.go:601] synchronizing PersistentVolume[pv-i-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I1108 02:31:45.486564  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 32345
I1108 02:31:45.486611  111868 pv_controller.go:860] updating PersistentVolume[pv-i-canbind-2]: bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2"
I1108 02:31:45.486624  111868 pv_controller.go:775] updating PersistentVolume[pv-i-canbind-2]: set phase Bound
I1108 02:31:45.489681  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (2.75698ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I1108 02:31:45.490007  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 32346
I1108 02:31:45.490056  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2 (uid: 32a5722d-6eb0-436d-a595-28d6eb1d6284)", boundByController: true
I1108 02:31:45.490059  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 32346
I1108 02:31:45.490076  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2
I1108 02:31:45.490088  111868 pv_controller.go:796] volume "pv-i-canbind-2" entered phase "Bound"
I1108 02:31:45.490096  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:31:45.490112  111868 pv_controller.go:601] synchronizing PersistentVolume[pv-i-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I1108 02:31:45.490103  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2]: binding to "pv-i-canbind-2"
I1108 02:31:45.490186  111868 pv_controller.go:899] volume "pv-i-canbind-2" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2"
I1108 02:31:45.495883  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-i-canbind-2: (5.300021ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I1108 02:31:45.495961  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (14.27333ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33852]
I1108 02:31:45.496431  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2" with version 32348
I1108 02:31:45.496468  111868 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2]: bound to "pv-i-canbind-2"
I1108 02:31:45.496481  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2] status: set phase Bound
I1108 02:31:45.497176  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-mix-bound
I1108 02:31:45.497203  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-mix-bound
I1108 02:31:45.497505  111868 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-mix-bound" match with Node "node-1"
I1108 02:31:45.497569  111868 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-mix-bound" on node "node-1"
I1108 02:31:45.497699  111868 scheduler_binder.go:653] PersistentVolume "pv-i-canbind-2", Node "node-2" mismatch for Pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-mix-bound": No matching NodeSelectorTerms
I1108 02:31:45.497742  111868 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-mix-bound", PVC "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4" on node "node-2"
I1108 02:31:45.497759  111868 scheduler_binder.go:725] storage class "wait-h8ht" of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4" does not support dynamic provisioning
I1108 02:31:45.497996  111868 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-mix-bound", node "node-1"
I1108 02:31:45.498066  111868 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-canbind-4", version 32339
I1108 02:31:45.498497  111868 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-mix-bound", node "node-1"
I1108 02:31:45.498547  111868 scheduler_binder.go:404] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4" bound to volume "pv-w-canbind-4"
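
AssumePodVolumes records the intended binding optimistically in the scheduler's in-memory assume cache (the "Assumed v1.PersistentVolume "pv-w-canbind-4", version 32339" line), and BindPodVolumes then issues the real API writes (the PUT on pv-w-canbind-4 below). A much-simplified, purely illustrative sketch of that assume-then-confirm idea follows; it is not the scheduler's actual scheduler_assume_cache.go implementation.

// Much-simplified sketch of an "assume cache": an optimistic in-memory record
// made before the API write lands, superseded once the informer delivers the update.
package main

import (
	"fmt"
	"sync"
)

type assumeCache struct {
	mu      sync.Mutex
	objects map[string]string // object name -> resourceVersion it was assumed at
}

func (c *assumeCache) Assume(name, version string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.objects[name] = version
}

func (c *assumeCache) Confirm(name, newVersion string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.objects, name) // the real object now replaces the assumed copy
	fmt.Printf("%s confirmed at version %s\n", name, newVersion)
}

func main() {
	c := &assumeCache{objects: map[string]string{}}
	c.Assume("pv-w-canbind-4", "32339")  // mirrors the Assumed ... version 32339 line
	c.Confirm("pv-w-canbind-4", "32351") // mirrors the later storeObjectUpdate to 32351
}
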
I1108 02:31:45.499561  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-i-canbind-2/status: (2.81162ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I1108 02:31:45.499780  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2" with version 32350
I1108 02:31:45.499805  111868 pv_controller.go:740] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2" entered phase "Bound"
I1108 02:31:45.499819  111868 pv_controller.go:955] volume "pv-i-canbind-2" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2"
I1108 02:31:45.499876  111868 pv_controller.go:956] volume "pv-i-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2 (uid: 32a5722d-6eb0-436d-a595-28d6eb1d6284)", boundByController: true
I1108 02:31:45.499901  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2" status after binding: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I1108 02:31:45.499937  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2" with version 32350
I1108 02:31:45.499948  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2]: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I1108 02:31:45.499960  111868 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2]: volume "pv-i-canbind-2" found: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2 (uid: 32a5722d-6eb0-436d-a595-28d6eb1d6284)", boundByController: true
I1108 02:31:45.499969  111868 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2]: claim is already correctly bound
I1108 02:31:45.499976  111868 pv_controller.go:929] binding volume "pv-i-canbind-2" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2"
I1108 02:31:45.499985  111868 pv_controller.go:827] updating PersistentVolume[pv-i-canbind-2]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2"
I1108 02:31:45.500001  111868 pv_controller.go:839] updating PersistentVolume[pv-i-canbind-2]: already bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2"
I1108 02:31:45.500007  111868 pv_controller.go:775] updating PersistentVolume[pv-i-canbind-2]: set phase Bound
I1108 02:31:45.500013  111868 pv_controller.go:778] updating PersistentVolume[pv-i-canbind-2]: phase Bound already set
I1108 02:31:45.500022  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2]: binding to "pv-i-canbind-2"
I1108 02:31:45.500035  111868 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2]: already bound to "pv-i-canbind-2"
I1108 02:31:45.500067  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2] status: set phase Bound
I1108 02:31:45.500089  111868 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2] status: phase Bound already set
I1108 02:31:45.500098  111868 pv_controller.go:955] volume "pv-i-canbind-2" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2"
I1108 02:31:45.500111  111868 pv_controller.go:956] volume "pv-i-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2 (uid: 32a5722d-6eb0-436d-a595-28d6eb1d6284)", boundByController: true
I1108 02:31:45.500120  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2" status after binding: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
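
The syncVolume/syncClaim passes above finish with both objects Bound and cross-referenced: the PV's spec.claimRef points at the claim and the claim's spec.volumeName points at the PV, which is why the later passes log "already bound" and "phase Bound already set". A minimal sketch of the resulting pair using the core/v1 types follows; the names and UID are copied from the log, but the snippet itself is for illustration only.

// Minimal sketch of the bound PV/PVC pair after the controller's two-way bind.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	ns := "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7"

	pv := v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "pv-i-canbind-2"},
		Spec: v1.PersistentVolumeSpec{
			ClaimRef: &v1.ObjectReference{ // written by the PV controller when it binds the claim
				Kind:      "PersistentVolumeClaim",
				Namespace: ns,
				Name:      "pvc-i-canbind-2",
				UID:       types.UID("32a5722d-6eb0-436d-a595-28d6eb1d6284"),
			},
		},
		Status: v1.PersistentVolumeStatus{Phase: v1.VolumeBound},
	}
	pvc := v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-i-canbind-2", Namespace: ns},
		Spec:       v1.PersistentVolumeClaimSpec{VolumeName: pv.Name}, // written back by syncClaim
		Status:     v1.PersistentVolumeClaimStatus{Phase: v1.ClaimBound},
	}
	fmt.Printf("%s -> %s/%s, %s -> %s\n",
		pv.Name, pv.Spec.ClaimRef.Namespace, pv.Spec.ClaimRef.Name,
		pvc.Name, pvc.Spec.VolumeName)
}
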
I1108 02:31:45.502692  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4: (3.525758ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:45.502921  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 32351
I1108 02:31:45.503152  111868 scheduler_binder.go:410] updating PersistentVolume[pv-w-canbind-4]: bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4"
I1108 02:31:45.503365  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Available, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4 (uid: 09f29b18-28b9-44af-9d65-acd58826b506)", boundByController: true
I1108 02:31:45.503380  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4
I1108 02:31:45.503399  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:31:45.503411  111868 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind-4]: volume not bound yet, waiting for syncClaim to fix it
I1108 02:31:45.503447  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4" with version 32341
I1108 02:31:45.503457  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:31:45.503532  111868 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4]: volume "pv-w-canbind-4" found: phase: Available, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4 (uid: 09f29b18-28b9-44af-9d65-acd58826b506)", boundByController: true
I1108 02:31:45.503545  111868 pv_controller.go:929] binding volume "pv-w-canbind-4" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4"
I1108 02:31:45.503559  111868 pv_controller.go:827] updating PersistentVolume[pv-w-canbind-4]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4"
I1108 02:31:45.503609  111868 pv_controller.go:839] updating PersistentVolume[pv-w-canbind-4]: already bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4"
I1108 02:31:45.503622  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-4]: set phase Bound
I1108 02:31:45.507039  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 32352
I1108 02:31:45.507154  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4 (uid: 09f29b18-28b9-44af-9d65-acd58826b506)", boundByController: true
I1108 02:31:45.507173  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4
I1108 02:31:45.507193  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:31:45.507209  111868 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind-4]: volume not bound yet, waiting for syncClaim to fix it
I1108 02:31:45.507421  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (3.253344ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:45.507709  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 32352
I1108 02:31:45.507771  111868 pv_controller.go:796] volume "pv-w-canbind-4" entered phase "Bound"
I1108 02:31:45.507787  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4]: binding to "pv-w-canbind-4"
I1108 02:31:45.507802  111868 pv_controller.go:899] volume "pv-w-canbind-4" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4"
I1108 02:31:45.513275  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-canbind-4: (5.158662ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:45.513742  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4" with version 32354
I1108 02:31:45.513953  111868 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4]: bound to "pv-w-canbind-4"
I1108 02:31:45.514079  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4] status: set phase Bound
I1108 02:31:45.516615  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-canbind-4/status: (2.126017ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:45.516973  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4" with version 32355
I1108 02:31:45.517004  111868 pv_controller.go:740] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4" entered phase "Bound"
I1108 02:31:45.517019  111868 pv_controller.go:955] volume "pv-w-canbind-4" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4"
I1108 02:31:45.517038  111868 pv_controller.go:956] volume "pv-w-canbind-4" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4 (uid: 09f29b18-28b9-44af-9d65-acd58826b506)", boundByController: true
I1108 02:31:45.517049  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4" status after binding: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
I1108 02:31:45.517077  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4" with version 32355
I1108 02:31:45.517086  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4]: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
I1108 02:31:45.517098  111868 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4]: volume "pv-w-canbind-4" found: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4 (uid: 09f29b18-28b9-44af-9d65-acd58826b506)", boundByController: true
I1108 02:31:45.517106  111868 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4]: claim is already correctly bound
I1108 02:31:45.517182  111868 pv_controller.go:929] binding volume "pv-w-canbind-4" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4"
I1108 02:31:45.517191  111868 pv_controller.go:827] updating PersistentVolume[pv-w-canbind-4]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4"
I1108 02:31:45.517207  111868 pv_controller.go:839] updating PersistentVolume[pv-w-canbind-4]: already bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4"
I1108 02:31:45.517216  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-4]: set phase Bound
I1108 02:31:45.517222  111868 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-4]: phase Bound already set
I1108 02:31:45.517229  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4]: binding to "pv-w-canbind-4"
I1108 02:31:45.517295  111868 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4]: already bound to "pv-w-canbind-4"
I1108 02:31:45.517306  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4] status: set phase Bound
I1108 02:31:45.517327  111868 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4] status: phase Bound already set
I1108 02:31:45.517338  111868 pv_controller.go:955] volume "pv-w-canbind-4" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4"
I1108 02:31:45.517373  111868 pv_controller.go:956] volume "pv-w-canbind-4" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4 (uid: 09f29b18-28b9-44af-9d65-acd58826b506)", boundByController: true
I1108 02:31:45.517384  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4" status after binding: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
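
The sync lines above end with the controller confirming that claim pvc-w-canbind-4 and volume pv-w-canbind-4 are already correctly bound on both sides (phase Bound, bindCompleted, boundByController). A minimal client-go sketch of how a test can verify the same bound state from outside the controller; the namespace and claim name are copied from the log, the kubeconfig path is made up, and recent client-go signatures (with a context argument) are assumed:

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isClaimBound reports whether the PVC has reached phase Bound and which
// PersistentVolume it is bound to, mirroring the pv_controller's
// "status after binding" lines above.
func isClaimBound(cs kubernetes.Interface, ns, name string) (bool, string, error) {
	pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, "", err
	}
	return pvc.Status.Phase == v1.ClaimBound, pvc.Spec.VolumeName, nil
}

func main() {
	// The kubeconfig path is illustrative only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	bound, pv, err := isClaimBound(cs,
		"volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7", "pvc-w-canbind-4")
	fmt.Println(bound, pv, err)
}
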
I1108 02:31:45.600538  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-mix-bound: (3.2533ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:45.698917  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-mix-bound: (1.724861ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:45.802627  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-mix-bound: (5.355805ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:45.900682  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-mix-bound: (1.882422ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:45.999778  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-mix-bound: (2.640712ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:46.099244  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-mix-bound: (2.036203ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:46.203796  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-mix-bound: (6.612347ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:46.206321  111868 cache.go:656] Couldn't expire cache for pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-mix-bound. Binding is still in progress.
I1108 02:31:46.298912  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-mix-bound: (1.739027ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:46.399376  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-mix-bound: (2.267872ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:46.501489  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-mix-bound: (4.321837ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:46.503566  111868 scheduler_binder.go:553] All PVCs for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-mix-bound" are bound
I1108 02:31:46.503725  111868 factory.go:698] Attempting to bind pod-mix-bound to node-1
I1108 02:31:46.509429  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-mix-bound/binding: (5.241901ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:46.509881  111868 scheduler.go:756] pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-mix-bound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible.
I1108 02:31:46.516276  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (5.750285ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
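
Binding itself is the scheduler creating a Binding object on the pod's binding subresource, which is the POST .../pods/pod-mix-bound/binding logged above right after "Attempting to bind pod-mix-bound to node-1". A hedged sketch of issuing the same request with client-go; the namespace, pod, and node names come from the log, and older client-go releases expose Bind without the context and options arguments:

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// bindPodToNode issues the same POST .../pods/<name>/binding request that the
// scheduler log above shows, assigning the pod to the chosen node.
func bindPodToNode(cs kubernetes.Interface, ns, pod, node string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: pod},
		Target:     v1.ObjectReference{Kind: "Node", Name: node},
	}
	return cs.CoreV1().Pods(ns).Bind(context.TODO(), binding, metav1.CreateOptions{})
}

func main() {
	cfg, err := rest.InClusterConfig() // illustrative; any rest.Config works
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	_ = bindPodToNode(cs,
		"volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7", "pod-mix-bound", "node-1")
}
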
I1108 02:31:46.599289  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-mix-bound: (2.070074ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:46.603321  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-canbind-4: (3.381226ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:46.606495  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-i-canbind-2: (2.327597ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:46.609312  111868 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-4: (2.055937ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:46.611678  111868 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-canbind-2: (1.555672ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:46.623897  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (11.541226ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:46.632229  111868 pv_controller_base.go:265] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2" deleted
I1108 02:31:46.632284  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 32346
I1108 02:31:46.632323  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2 (uid: 32a5722d-6eb0-436d-a595-28d6eb1d6284)", boundByController: true
I1108 02:31:46.632336  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2
I1108 02:31:46.634021  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-i-canbind-2: (1.400147ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I1108 02:31:46.634286  111868 pv_controller.go:545] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2 not found
I1108 02:31:46.634314  111868 pv_controller.go:573] volume "pv-i-canbind-2" is released and reclaim policy "Retain" will be executed
I1108 02:31:46.634330  111868 pv_controller.go:775] updating PersistentVolume[pv-i-canbind-2]: set phase Released
I1108 02:31:46.636597  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (11.422812ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:46.637173  111868 pv_controller_base.go:265] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4" deleted
I1108 02:31:46.637480  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (2.824115ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I1108 02:31:46.637696  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 32502
I1108 02:31:46.637720  111868 pv_controller.go:796] volume "pv-i-canbind-2" entered phase "Released"
I1108 02:31:46.637729  111868 pv_controller.go:1009] reclaimVolume[pv-i-canbind-2]: policy is Retain, nothing to do
I1108 02:31:46.637748  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 32352
I1108 02:31:46.637769  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4 (uid: 09f29b18-28b9-44af-9d65-acd58826b506)", boundByController: true
I1108 02:31:46.637781  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4
I1108 02:31:46.641401  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-canbind-4: (3.371037ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:46.641778  111868 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4 not found
I1108 02:31:46.641810  111868 pv_controller.go:573] volume "pv-w-canbind-4" is released and reclaim policy "Retain" will be executed
I1108 02:31:46.641825  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-4]: set phase Released
I1108 02:31:46.645056  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (2.788121ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:46.645351  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 32504
I1108 02:31:46.645389  111868 pv_controller.go:796] volume "pv-w-canbind-4" entered phase "Released"
I1108 02:31:46.645399  111868 pv_controller.go:1009] reclaimVolume[pv-w-canbind-4]: policy is Retain, nothing to do
I1108 02:31:46.645425  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 32502
I1108 02:31:46.645448  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Released, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2 (uid: 32a5722d-6eb0-436d-a595-28d6eb1d6284)", boundByController: true
I1108 02:31:46.645459  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2
I1108 02:31:46.645475  111868 pv_controller.go:545] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2 not found
I1108 02:31:46.645480  111868 pv_controller.go:1009] reclaimVolume[pv-i-canbind-2]: policy is Retain, nothing to do
I1108 02:31:46.645756  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 32504
I1108 02:31:46.645800  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Released, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4 (uid: 09f29b18-28b9-44af-9d65-acd58826b506)", boundByController: true
I1108 02:31:46.645812  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4
I1108 02:31:46.645833  111868 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4 not found
I1108 02:31:46.645858  111868 pv_controller.go:1009] reclaimVolume[pv-w-canbind-4]: policy is Retain, nothing to do
I1108 02:31:46.646066  111868 pv_controller_base.go:216] volume "pv-i-canbind-2" deleted
I1108 02:31:46.646110  111868 pv_controller_base.go:403] deletion of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind-2" was already processed
I1108 02:31:46.649559  111868 httplog.go:90] DELETE /api/v1/persistentvolumes: (11.458468ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I1108 02:31:46.650246  111868 pv_controller_base.go:216] volume "pv-w-canbind-4" deleted
I1108 02:31:46.650284  111868 pv_controller_base.go:403] deletion of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-4" was already processed
I1108 02:31:46.668033  111868 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (17.701162ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
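
During teardown the claims are deleted first, the controller then finds each still-existing volume's claim missing, marks the volume Released, and, because the reclaim policy is Retain, does nothing further ("reclaimVolume: policy is Retain, nothing to do") until the volumes themselves are deleted. A sketch of a PersistentVolume object with that Retain policy, roughly the shape such a fixture could take; the capacity, access mode, and HostPath source are illustrative, not taken from the test:

package main

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// retainPV builds a PersistentVolume whose reclaim policy is Retain, so that
// deleting its claim only moves the volume to phase Released, as the
// pv_controller log above shows for pv-w-canbind-4 and pv-i-canbind-2.
func retainPV(name string) *v1.PersistentVolume {
	return &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PersistentVolumeSpec{
			Capacity: v1.ResourceList{
				v1.ResourceStorage: resource.MustParse("1Gi"), // illustrative size
			},
			AccessModes:                   []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimRetain,
			PersistentVolumeSource: v1.PersistentVolumeSource{
				// HostPath source is illustrative only.
				HostPath: &v1.HostPathVolumeSource{Path: "/tmp/" + name},
			},
		},
	}
}

func main() { _ = retainPV("pv-w-canbind-4") }
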
I1108 02:31:46.668353  111868 volume_binding_test.go:191] Running test immediate pvc prebound
I1108 02:31:46.670939  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.257911ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I1108 02:31:46.672989  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.633013ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I1108 02:31:46.675195  111868 httplog.go:90] POST /api/v1/persistentvolumes: (1.741593ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I1108 02:31:46.676493  111868 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-pvc-prebound", version 32514
I1108 02:31:46.676559  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Pending, bound to: "", boundByController: false
I1108 02:31:46.676584  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I1108 02:31:46.676592  111868 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I1108 02:31:46.677515  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (1.845417ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I1108 02:31:46.677989  111868 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound", version 32515
I1108 02:31:46.678030  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1108 02:31:46.678045  111868 pv_controller.go:345] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested
I1108 02:31:46.678065  111868 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested and found: phase: Pending, bound to: "", boundByController: false
I1108 02:31:46.678082  111868 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: volume is unbound, binding
I1108 02:31:46.678102  111868 pv_controller.go:929] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound"
I1108 02:31:46.678114  111868 pv_controller.go:827] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound"
I1108 02:31:46.678139  111868 pv_controller.go:847] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound" bound to volume "pv-i-pvc-prebound"
I1108 02:31:46.679719  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (2.789948ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:46.680072  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 32516
I1108 02:31:46.680103  111868 pv_controller.go:796] volume "pv-i-pvc-prebound" entered phase "Available"
I1108 02:31:46.680132  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 32516
I1108 02:31:46.680152  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I1108 02:31:46.680171  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I1108 02:31:46.680177  111868 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I1108 02:31:46.680234  111868 pv_controller.go:778] updating PersistentVolume[pv-i-pvc-prebound]: phase Available already set
I1108 02:31:46.680628  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (2.462708ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I1108 02:31:46.680871  111868 store.go:365] GuaranteedUpdate of /89d04f6a-c49a-49bf-9f59-18031bc0a51b/persistentvolumes/pv-i-pvc-prebound failed because of a conflict, going to retry
I1108 02:31:46.681094  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound: (2.449615ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34172]
I1108 02:31:46.681159  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pvc-prebound
I1108 02:31:46.681183  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pvc-prebound
I1108 02:31:46.681340  111868 pv_controller.go:850] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
E1108 02:31:46.681438  111868 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1108 02:31:46.681385  111868 pv_controller.go:932] error binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1108 02:31:46.681504  111868 pv_controller_base.go:251] could not sync claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
E1108 02:31:46.681587  111868 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1108 02:31:46.681646  111868 factory.go:648] Error scheduling volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1108 02:31:46.681681  111868 scheduler.go:774] Updating pod condition for volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I1108 02:31:46.684000  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.44235ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:31:46.684543  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (1.994936ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:46.687431  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound/status: (5.289839ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34172]
E1108 02:31:46.687797  111868 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
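
Two things fail in the block above. The controller's PUT on pv-i-pvc-prebound comes back 409 because the object changed between its read and its write ("the object has been modified; please apply your changes to the latest version and try again"), so the bind is abandoned until the next sync; meanwhile the scheduler's "VolumeBinding" filter rejects pod-i-pvc-prebound because its immediate-mode claim is still unbound. The 409 is ordinary optimistic-concurrency behaviour, and callers typically re-read and retry, for example with client-go's retry.RetryOnConflict. A minimal sketch under that assumption, with the volume and claim names copied from the log and the ClaimRef update simplified:

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

// bindVolumeToClaim re-reads the PersistentVolume and retries the update
// whenever the API server answers 409 Conflict, the error the pv_controller
// logs above before giving up until its next sync.
func bindVolumeToClaim(cs kubernetes.Interface, pvName, claimNS, claimName string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pv, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(), pvName, metav1.GetOptions{})
		if err != nil {
			return err
		}
		claim, err := cs.CoreV1().PersistentVolumeClaims(claimNS).Get(context.TODO(), claimName, metav1.GetOptions{})
		if err != nil {
			return err
		}
		// Point the volume at the claim; a much simplified version of what
		// the controller's bind step does.
		pv.Spec.ClaimRef = &v1.ObjectReference{
			Kind:       "PersistentVolumeClaim",
			APIVersion: "v1",
			Namespace:  claim.Namespace,
			Name:       claim.Name,
			UID:        claim.UID,
		}
		_, err = cs.CoreV1().PersistentVolumes().Update(context.TODO(), pv, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	// The kubeconfig path is illustrative only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	_ = bindVolumeToClaim(cs, "pv-i-pvc-prebound",
		"volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7", "pvc-i-prebound")
}
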
I1108 02:31:46.783581  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.614246ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:46.884609  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.440179ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:46.983897  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.745179ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:47.084230  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.179592ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:47.185947  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (3.898412ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:47.284312  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.292951ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:47.384191  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.058725ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:47.489262  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (6.208322ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:47.584748  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.201472ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:47.683796  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.879345ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:47.786824  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (4.238536ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:47.884349  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.192238ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:47.984198  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.994546ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:48.084109  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.021618ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:48.183943  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.725489ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:48.284357  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.2427ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:48.384205  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.128845ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:48.484234  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.994615ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:48.584939  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.227516ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:48.684528  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.468036ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:48.803707  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (18.706823ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:48.886994  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (4.692472ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:48.988244  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (4.162014ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:49.084683  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.549866ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:49.185076  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.957116ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:49.284562  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.525485ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:49.384912  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.647667ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:49.484087  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.008949ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:49.585049  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.655995ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:49.684008  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.004283ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:49.785043  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.958395ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:49.886812  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (4.759136ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:49.984492  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.47199ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:50.083884  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.727775ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:50.184436  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.321389ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:50.284291  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.240217ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:50.384230  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.167933ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:50.484077  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.008827ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:50.584331  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.273813ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:50.684477  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.44424ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:50.783826  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.887362ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:50.883892  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.83835ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:50.984512  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.311444ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:51.084453  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.405672ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:51.184072  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.921405ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:51.283919  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.910812ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:51.384091  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.036847ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:51.483724  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.730647ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:51.585176  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.977818ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:51.685603  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.23793ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:51.786330  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (4.208481ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:51.884632  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.767865ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:51.983789  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.620498ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:52.083992  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.915367ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:52.183650  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.630313ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:52.284002  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.98136ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:52.383766  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.759567ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:52.489446  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (6.332412ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:52.584523  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.413114ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:52.684015  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.037676ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:52.784029  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.929788ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:52.884704  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.647645ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:52.984272  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.198467ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:53.084458  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.313798ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:53.185389  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.96769ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:53.285691  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (3.65357ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:53.383763  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.701533ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:53.485652  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (3.378715ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:53.587325  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.244573ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:53.695402  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (13.365816ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:53.786610  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (4.529756ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:53.884393  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.340397ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:53.984161  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.095446ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:54.084703  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.684763ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:54.184099  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.017388ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:54.288221  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (4.363594ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:54.387411  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (5.311875ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:54.484655  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.203797ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:54.584793  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.148484ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:54.684005  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.990945ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:54.784369  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.266321ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:54.884052  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.900577ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:54.984370  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.287602ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:55.084214  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.855424ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:55.129763  111868 httplog.go:90] GET /api/v1/namespaces/default: (2.048386ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:55.132034  111868 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.782503ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:55.133895  111868 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.494872ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:55.183924  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.884343ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:55.285534  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (3.450597ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:55.384002  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.924518ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:55.488392  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (6.35765ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:55.584625  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.004529ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:55.687195  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.808699ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:55.784028  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.995621ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:55.885961  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (3.436551ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:55.984009  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.942586ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:56.084329  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.269796ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:56.184111  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.040821ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:56.284139  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.07489ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:56.384214  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.12029ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:56.485817  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (3.751114ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:56.585451  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (3.42847ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:56.691163  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (8.454086ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:56.784068  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.028678ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:56.883932  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.878486ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:56.984140  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.073601ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:57.084181  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.107301ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:57.184745  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.685292ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:57.284233  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.150314ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:57.384497  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.441593ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:57.484500  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.360963ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:57.584479  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.457591ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:57.684704  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.631026ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:57.784193  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.115118ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:57.883869  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.814862ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:57.984161  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.064349ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:58.083990  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.849656ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:58.183511  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.504093ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:58.283988  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.850778ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:58.384249  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.150212ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:58.484323  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.228673ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:58.584096  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.063134ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:58.684228  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.152902ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:58.784104  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.864808ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:58.884323  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.206943ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:58.984385  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.267548ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:59.083676  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.630949ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:59.184031  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.965788ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:59.283942  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.831763ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:59.383964  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.923938ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:59.484334  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.16838ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:59.583966  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.926426ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:59.683957  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.924981ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:59.783998  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.972103ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:59.884102  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.009457ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:31:59.983995  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.834321ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:32:00.083788  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.737815ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:32:00.183960  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.93803ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:32:00.283803  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.83068ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:32:00.384068  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.005513ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:32:00.409345  111868 pv_controller_base.go:426] resyncing PV controller
I1108 02:32:00.409459  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 32516
I1108 02:32:00.409502  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I1108 02:32:00.409522  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I1108 02:32:00.409499  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound" with version 32515
I1108 02:32:00.409530  111868 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I1108 02:32:00.409554  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1108 02:32:00.409558  111868 pv_controller.go:778] updating PersistentVolume[pv-i-pvc-prebound]: phase Available already set
I1108 02:32:00.409575  111868 pv_controller.go:345] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested
I1108 02:32:00.409606  111868 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested and found: phase: Available, bound to: "", boundByController: false
I1108 02:32:00.409631  111868 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: volume is unbound, binding
I1108 02:32:00.409653  111868 pv_controller.go:929] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound"
I1108 02:32:00.409664  111868 pv_controller.go:827] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound"
I1108 02:32:00.409708  111868 pv_controller.go:847] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound" bound to volume "pv-i-pvc-prebound"
I1108 02:32:00.416970  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound: (6.726796ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:32:00.417294  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 33860
I1108 02:32:00.417328  111868 pv_controller.go:860] updating PersistentVolume[pv-i-pvc-prebound]: bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound"
I1108 02:32:00.417340  111868 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1108 02:32:00.417339  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 33860
I1108 02:32:00.417356  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pvc-prebound
I1108 02:32:00.417374  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pvc-prebound
I1108 02:32:00.417384  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound (uid: 45136918-8dd7-4124-99e9-417efabb1d18)", boundByController: true
I1108 02:32:00.417397  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound
I1108 02:32:00.417439  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound found: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1108 02:32:00.417459  111868 pv_controller.go:617] synchronizing PersistentVolume[pv-i-pvc-prebound]: all is bound
I1108 02:32:00.417468  111868 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
E1108 02:32:00.417566  111868 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1108 02:32:00.417623  111868 factory.go:648] Error scheduling volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1108 02:32:00.417664  111868 scheduler.go:774] Updating pod condition for volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E1108 02:32:00.417687  111868 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
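The "pod has unbound immediate PersistentVolumeClaims" failures are expected while pvc-i-prebound is still Pending: the VolumeBinding filter will not admit a pod whose immediate-binding claim has not yet been bound by the PV controller, so the scheduler retries until the bind completes. The log implies the claim is pre-bound to the volume via spec.volumeName ("volume \"pv-i-pvc-prebound\" requested" while the PV itself is still bound to ""); a minimal sketch of such a PV/claim pair (capacity, volume source, and the function name are assumptions, not the test's fixture code):

```go
// Sketch only: a PV and a claim pre-bound to it via Spec.VolumeName, the shape
// implied by the pv-i-pvc-prebound / pvc-i-prebound log lines. Capacity, the
// hostPath source, and the function name are assumptions.
package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func preboundPVAndPVC(ns string) (*v1.PersistentVolume, *v1.PersistentVolumeClaim) {
	pv := &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "pv-i-pvc-prebound"},
		Spec: v1.PersistentVolumeSpec{
			Capacity:    v1.ResourceList{v1.ResourceStorage: resource.MustParse("5Gi")},
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				HostPath: &v1.HostPathVolumeSource{Path: "/tmp/pv-i-pvc-prebound"},
			},
		},
	}
	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-i-prebound", Namespace: ns},
		Spec: v1.PersistentVolumeClaimSpec{
			// Pre-binding: the claim names the volume; the PV controller then
			// sets the PV's claimRef and moves both objects to phase Bound.
			VolumeName:  "pv-i-pvc-prebound",
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("5Gi")},
			},
		},
	}
	return pv, pvc
}
```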
I1108 02:32:00.419747  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.402212ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:00.419797  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (1.967154ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:00.420250  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 33862
I1108 02:32:00.420289  111868 pv_controller.go:796] volume "pv-i-pvc-prebound" entered phase "Bound"
I1108 02:32:00.420376  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 33862
I1108 02:32:00.420418  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound (uid: 45136918-8dd7-4124-99e9-417efabb1d18)", boundByController: true
I1108 02:32:00.420432  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound
I1108 02:32:00.420459  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound found: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1108 02:32:00.420474  111868 pv_controller.go:617] synchronizing PersistentVolume[pv-i-pvc-prebound]: all is bound
I1108 02:32:00.420483  111868 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1108 02:32:00.420492  111868 pv_controller.go:778] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I1108 02:32:00.420929  111868 store.go:365] GuaranteedUpdate of /89d04f6a-c49a-49bf-9f59-18031bc0a51b/persistentvolumes/pv-i-pvc-prebound failed because of a conflict, going to retry
I1108 02:32:00.421220  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (3.619659ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34174]
I1108 02:32:00.421349  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (2.998305ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37858]
I1108 02:32:00.421525  111868 pv_controller.go:788] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound failed: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1108 02:32:00.421556  111868 pv_controller.go:938] error binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound": failed saving the volume status: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1108 02:32:00.421576  111868 pv_controller_base.go:251] could not sync claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
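The 409 and the "failed saving the volume status: Operation cannot be fulfilled ..." lines above are the normal optimistic-concurrency path: the status update carried a stale resourceVersion, so the controller gives up on this sync and retries later with the latest object. A minimal client-go sketch of the same retry-on-conflict pattern (the function name is illustrative and this is not the controller's actual code; signatures assume the pre-context client-go of this era):

```go
// Sketch only: retrying a PV status update on 409 conflicts with client-go's
// RetryOnConflict helper. The function name is illustrative; signatures assume
// the pre-context client-go API current at the time of this log.
package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func setPVPhaseBound(cs kubernetes.Interface, pvName string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read the object on every attempt so the update carries the
		// current resourceVersion rather than the stale one that conflicted.
		pv, err := cs.CoreV1().PersistentVolumes().Get(pvName, metav1.GetOptions{})
		if err != nil {
			return err
		}
		pv.Status.Phase = v1.VolumeBound
		_, err = cs.CoreV1().PersistentVolumes().UpdateStatus(pv)
		return err
	})
}
```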
I1108 02:32:00.485370  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (3.27686ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:00.583799  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.817824ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:00.683778  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.753434ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:00.785280  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.928424ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:00.885662  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.96374ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:00.983995  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.844989ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:01.084091  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.895002ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:01.184350  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.278087ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:01.284375  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.184771ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:01.384403  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.304074ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:01.484336  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.227611ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:01.584279  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.21135ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:01.684399  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.272019ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:01.783884  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.752248ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:01.883817  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.789112ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:01.984011  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.935963ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:02.084215  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.177109ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:02.186172  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (3.022032ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:02.209797  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pvc-prebound
I1108 02:32:02.209866  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pvc-prebound
E1108 02:32:02.210096  111868 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1108 02:32:02.210155  111868 factory.go:648] Error scheduling volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1108 02:32:02.210184  111868 scheduler.go:774] Updating pod condition for volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E1108 02:32:02.210205  111868 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1108 02:32:02.213952  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.750759ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:02.215247  111868 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events/pod-i-pvc-prebound.15d50f2a96211f32: (3.967912ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:02.286920  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (4.808818ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:02.387181  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (4.972422ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:02.483996  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.90868ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:02.584330  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.156932ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:02.684091  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.026833ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:02.784525  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.404199ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:02.884296  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.096079ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:02.983950  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.925317ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:03.084167  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.0886ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:03.184360  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.247691ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:03.284351  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.282608ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:03.384222  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.100586ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:03.484039  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.968546ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:03.583732  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.698446ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:03.684254  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.153897ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:03.784474  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.389567ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:03.884443  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.269102ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:03.984432  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.192409ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:04.084182  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.071559ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:04.184504  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.279053ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:04.284514  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.383118ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:04.385288  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (3.144765ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:04.484677  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.540743ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:04.583934  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.905306ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:04.684116  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.072848ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:04.784205  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.114524ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:04.884181  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.112173ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:04.984115  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.000849ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:05.084079  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.964351ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:05.129494  111868 httplog.go:90] GET /api/v1/namespaces/default: (1.721156ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:05.131541  111868 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.512113ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:05.133248  111868 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.209663ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:05.184210  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.127727ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:05.284233  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.157096ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:05.384567  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.406145ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:05.484364  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.204278ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:05.584242  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.111627ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:05.684239  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.153854ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:05.784196  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.107127ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:05.884221  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.071341ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:05.985759  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (3.654723ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:06.084155  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.04172ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:06.184347  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.246225ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:06.284241  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.105046ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:06.384334  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.226804ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:06.484376  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.090111ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:06.584249  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.172101ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:06.684290  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.196565ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:06.784161  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.045606ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:06.884614  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.517268ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:06.983833  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.746005ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:07.084443  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.244067ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:07.184298  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.154908ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:07.284194  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.050981ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:07.384135  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.015615ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:07.484267  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.192709ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:07.584133  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.084687ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:07.684067  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.97774ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:07.784372  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.324551ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:07.884245  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.173121ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:07.984240  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.074702ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:08.085054  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.873036ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:08.183740  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.785162ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:08.283820  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.779347ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:08.383795  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.774556ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:08.483723  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.687873ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:08.584100  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.059585ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:08.683967  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.982044ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:08.783827  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.867545ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:08.883884  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.801537ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:08.983900  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.79878ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:09.084626  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.479965ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:09.183931  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.891902ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:09.284116  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.10595ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:09.383968  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.864631ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:09.484048  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.970906ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:09.584250  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.22582ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:09.685058  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.974475ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:09.784077  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.050841ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:09.883964  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.867141ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:09.984243  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.129152ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:10.086716  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (4.592324ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:10.184303  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.167728ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:10.284103  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.976539ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:10.387573  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (5.379612ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:10.484107  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.960448ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:10.583964  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.910976ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:10.684462  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.29935ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:10.784151  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.119891ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:10.885166  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.851042ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:10.986569  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (4.466015ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:11.084743  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.630098ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:11.184289  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.203937ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:11.284123  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.99639ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:11.383985  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.918203ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:11.485555  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.263201ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:11.586167  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (3.972729ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:11.683684  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.69669ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:11.792207  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.440942ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:11.884034  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.777945ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:11.984101  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.982611ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:12.083961  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.832796ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:12.184231  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.146679ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:12.285517  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.175146ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:12.383904  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.803966ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:12.483907  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.801433ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:12.585005  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.97607ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:12.684281  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.089683ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:12.785355  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (3.249812ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:12.884906  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.775119ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:12.984281  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.26075ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:13.084721  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.561151ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:13.184824  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.606793ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:13.284308  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.20115ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:13.386878  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (4.782152ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:13.485336  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (3.166991ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:13.585054  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.984178ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:13.685301  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (3.295395ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:13.783951  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.885832ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:13.884300  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.205719ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:13.984336  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.172444ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:14.084251  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.233923ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:14.187566  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (5.371793ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:14.288487  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.154389ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:14.384243  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.115977ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:14.486693  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.198279ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:14.586099  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (4.006553ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:14.685642  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.720595ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:14.785666  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (3.132893ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:14.884119  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.062011ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:14.983920  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.846436ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:15.084717  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.593677ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:15.133370  111868 httplog.go:90] GET /api/v1/namespaces/default: (5.376178ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:15.135667  111868 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.720728ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:15.138261  111868 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.119202ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:15.184784  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.734468ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:15.283901  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.750539ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:15.383979  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.898389ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:15.410924  111868 pv_controller_base.go:426] resyncing PV controller
I1108 02:32:15.411031  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 33862
I1108 02:32:15.411077  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound (uid: 45136918-8dd7-4124-99e9-417efabb1d18)", boundByController: true
I1108 02:32:15.411089  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound
I1108 02:32:15.411112  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound found: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1108 02:32:15.411143  111868 pv_controller.go:617] synchronizing PersistentVolume[pv-i-pvc-prebound]: all is bound
I1108 02:32:15.411157  111868 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1108 02:32:15.411176  111868 pv_controller.go:778] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I1108 02:32:15.411202  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound" with version 32515
I1108 02:32:15.411220  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1108 02:32:15.411235  111868 pv_controller.go:345] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested
I1108 02:32:15.411255  111868 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested and found: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound (uid: 45136918-8dd7-4124-99e9-417efabb1d18)", boundByController: true
I1108 02:32:15.411270  111868 pv_controller.go:388] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: volume already bound, finishing the binding
I1108 02:32:15.411280  111868 pv_controller.go:929] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound"
I1108 02:32:15.411289  111868 pv_controller.go:827] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound"
I1108 02:32:15.411319  111868 pv_controller.go:839] updating PersistentVolume[pv-i-pvc-prebound]: already bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound"
I1108 02:32:15.411331  111868 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1108 02:32:15.411340  111868 pv_controller.go:778] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I1108 02:32:15.411350  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: binding to "pv-i-pvc-prebound"
I1108 02:32:15.411366  111868 pv_controller.go:899] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound"
I1108 02:32:15.414757  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-i-prebound: (2.868499ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:15.415729  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pvc-prebound
I1108 02:32:15.415774  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pvc-prebound
E1108 02:32:15.416005  111868 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1108 02:32:15.416291  111868 factory.go:648] Error scheduling volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1108 02:32:15.416339  111868 scheduler.go:774] Updating pod condition for volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E1108 02:32:15.416359  111868 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1108 02:32:15.416674  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound" with version 34926
I1108 02:32:15.416710  111868 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: bound to "pv-i-pvc-prebound"
I1108 02:32:15.416730  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound] status: set phase Bound
I1108 02:32:15.418519  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.735247ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:15.420662  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-i-prebound/status: (3.571756ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:15.421214  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound" with version 34928
I1108 02:32:15.421275  111868 pv_controller.go:740] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound" entered phase "Bound"
I1108 02:32:15.421296  111868 pv_controller.go:955] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound"
I1108 02:32:15.421322  111868 pv_controller.go:956] volume "pv-i-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound (uid: 45136918-8dd7-4124-99e9-417efabb1d18)", boundByController: true
I1108 02:32:15.421339  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound" status after binding: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I1108 02:32:15.421378  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound" with version 34928
I1108 02:32:15.421396  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I1108 02:32:15.421414  111868 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: volume "pv-i-pvc-prebound" found: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound (uid: 45136918-8dd7-4124-99e9-417efabb1d18)", boundByController: true
I1108 02:32:15.421424  111868 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: claim is already correctly bound
I1108 02:32:15.421433  111868 pv_controller.go:929] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound"
I1108 02:32:15.421441  111868 pv_controller.go:827] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound"
I1108 02:32:15.421456  111868 pv_controller.go:839] updating PersistentVolume[pv-i-pvc-prebound]: already bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound"
I1108 02:32:15.421464  111868 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1108 02:32:15.421470  111868 pv_controller.go:778] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I1108 02:32:15.421477  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: binding to "pv-i-pvc-prebound"
I1108 02:32:15.421489  111868 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound]: already bound to "pv-i-pvc-prebound"
I1108 02:32:15.421495  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound] status: set phase Bound
I1108 02:32:15.421521  111868 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound] status: phase Bound already set
I1108 02:32:15.421541  111868 pv_controller.go:955] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound"
I1108 02:32:15.421575  111868 pv_controller.go:956] volume "pv-i-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound (uid: 45136918-8dd7-4124-99e9-417efabb1d18)", boundByController: true
I1108 02:32:15.421590  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound" status after binding: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I1108 02:32:15.483765  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.720799ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:15.583800  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.746903ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:15.684661  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.486396ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:15.784049  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.944494ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:15.884051  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.977077ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:15.984207  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.152479ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.084181  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.112184ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.183871  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (1.809896ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.289010  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (6.935941ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.386429  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (4.325961ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.485468  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (3.365628ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.589470  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (7.433614ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.684233  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (2.142579ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.691020  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pvc-prebound: (6.103567ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.693915  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-i-prebound: (2.199406ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.700154  111868 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-pvc-prebound: (5.63833ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.713928  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pvc-prebound
I1108 02:32:16.713980  111868 scheduler.go:607] Skip schedule deleting pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pvc-prebound
I1108 02:32:16.720658  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (6.236889ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.723687  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (22.762336ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.735185  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (10.969744ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.735646  111868 pv_controller_base.go:265] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound" deleted
I1108 02:32:16.735707  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 33862
I1108 02:32:16.735743  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound (uid: 45136918-8dd7-4124-99e9-417efabb1d18)", boundByController: true
I1108 02:32:16.735758  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound
I1108 02:32:16.740200  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-i-prebound: (4.178793ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.740563  111868 pv_controller.go:545] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound not found
I1108 02:32:16.740595  111868 pv_controller.go:573] volume "pv-i-pvc-prebound" is released and reclaim policy "Retain" will be executed
I1108 02:32:16.740610  111868 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Released
I1108 02:32:16.749728  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (8.749971ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.750023  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 35274
I1108 02:32:16.750055  111868 pv_controller.go:796] volume "pv-i-pvc-prebound" entered phase "Released"
I1108 02:32:16.750067  111868 pv_controller.go:1009] reclaimVolume[pv-i-pvc-prebound]: policy is Retain, nothing to do
I1108 02:32:16.750092  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 35274
I1108 02:32:16.750118  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Released, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound (uid: 45136918-8dd7-4124-99e9-417efabb1d18)", boundByController: true
I1108 02:32:16.750136  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound
I1108 02:32:16.750163  111868 pv_controller.go:545] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound not found
I1108 02:32:16.750169  111868 pv_controller.go:1009] reclaimVolume[pv-i-pvc-prebound]: policy is Retain, nothing to do
I1108 02:32:16.756087  111868 httplog.go:90] DELETE /api/v1/persistentvolumes: (20.057965ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.756714  111868 pv_controller_base.go:216] volume "pv-i-pvc-prebound" deleted
I1108 02:32:16.756788  111868 pv_controller_base.go:403] deletion of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-prebound" was already processed
I1108 02:32:16.767629  111868 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (11.057343ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
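The block above is the tail of the "immediate pvc prebound" case: the claim reports bindCompleted: true, boundByController: false while the volume reports boundByController: true, i.e. the claim was pre-bound by the user (its spec.volumeName already names the PV) and the controller only had to finish the volume side before the fixture is torn down. As a rough illustration only — not the fixture code from volume_binding_test.go, with assumed object names, an assumed Immediate-mode class, and field types as in the v1.17-era k8s.io/api module — a pre-bound claim of that shape can be built like this:

```go
// Hypothetical sketch of the "immediate pvc prebound" fixture shape; all names
// ("immediate-sc", the namespace, the hostPath source) are assumptions.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	scName := "immediate-sc" // assumed; the test generates its own class names

	// The PV the claim will point at.
	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "pv-i-pvc-prebound"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity:                      corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Mi")},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain, // matches `reclaim policy "Retain"` in the log
			StorageClassName:              scName,
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/pv-i-pvc-prebound"}, // assumed source type
			},
		},
	}

	// Pre-binding by the user: spec.volumeName is already set, so syncClaim only
	// has to complete the bind (fill the PV's claimRef, set phases and annotations).
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-i-prebound", Namespace: "volume-scheduling"}, // assumed namespace; the real one has a generated suffix
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &scName,
			VolumeName:       pv.Name,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Mi")},
			},
		},
	}
	fmt.Printf("claim %s pre-bound to volume %s\n", pvc.Name, pvc.Spec.VolumeName)
}
```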
I1108 02:32:16.770048  111868 volume_binding_test.go:191] Running test immediate pv prebound
I1108 02:32:16.774802  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (4.372396ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.777776  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.460224ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.781282  111868 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-prebound", version 35288
I1108 02:32:16.781348  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Pending, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound (uid: )", boundByController: false
I1108 02:32:16.781358  111868 pv_controller.go:504] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound
I1108 02:32:16.781366  111868 pv_controller.go:775] updating PersistentVolume[pv-i-prebound]: set phase Available
I1108 02:32:16.781760  111868 httplog.go:90] POST /api/v1/persistentvolumes: (3.406193ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.784610  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (2.890048ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.784912  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35291
I1108 02:32:16.784940  111868 pv_controller.go:796] volume "pv-i-prebound" entered phase "Available"
I1108 02:32:16.784969  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35291
I1108 02:32:16.784993  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound (uid: )", boundByController: false
I1108 02:32:16.785001  111868 pv_controller.go:504] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound
I1108 02:32:16.785007  111868 pv_controller.go:775] updating PersistentVolume[pv-i-prebound]: set phase Available
I1108 02:32:16.785016  111868 pv_controller.go:778] updating PersistentVolume[pv-i-prebound]: phase Available already set
I1108 02:32:16.786150  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (3.804011ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.786348  111868 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound", version 35292
I1108 02:32:16.786375  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:16.786411  111868 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Available, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound (uid: )", boundByController: false
I1108 02:32:16.786425  111868 pv_controller.go:929] binding volume "pv-i-prebound" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound"
I1108 02:32:16.786442  111868 pv_controller.go:827] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound"
I1108 02:32:16.786478  111868 pv_controller.go:847] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I1108 02:32:16.789802  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (3.076439ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.790286  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35293
I1108 02:32:16.790322  111868 pv_controller.go:860] updating PersistentVolume[pv-i-prebound]: bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound"
I1108 02:32:16.790335  111868 pv_controller.go:775] updating PersistentVolume[pv-i-prebound]: set phase Bound
I1108 02:32:16.790384  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35293
I1108 02:32:16.790424  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound (uid: a2c0ed62-c71f-41e9-9139-99599eab7d9c)", boundByController: false
I1108 02:32:16.790437  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound
I1108 02:32:16.790480  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:16.790501  111868 pv_controller.go:604] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1108 02:32:16.795515  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (3.832974ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.795902  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35296
I1108 02:32:16.795931  111868 pv_controller.go:796] volume "pv-i-prebound" entered phase "Bound"
I1108 02:32:16.795947  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I1108 02:32:16.795963  111868 pv_controller.go:899] volume "pv-i-prebound" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound"
I1108 02:32:16.796193  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35296
I1108 02:32:16.796235  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound (uid: a2c0ed62-c71f-41e9-9139-99599eab7d9c)", boundByController: false
I1108 02:32:16.796250  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound
I1108 02:32:16.796276  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:16.796295  111868 pv_controller.go:604] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1108 02:32:16.798797  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pv-prebound
I1108 02:32:16.798823  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pv-prebound
E1108 02:32:16.799021  111868 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-pv-prebound": pod has unbound immediate PersistentVolumeClaims
E1108 02:32:16.799060  111868 factory.go:648] Error scheduling volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pv-prebound: error while running "VolumeBinding" filter plugin for pod "pod-i-pv-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1108 02:32:16.799093  111868 scheduler.go:774] Updating pod condition for volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pv-prebound to (PodScheduled==False, Reason=Unschedulable)
I1108 02:32:16.801713  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (13.626043ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.804478  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-i-pv-prebound: (8.212004ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.805248  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound" with version 35299
I1108 02:32:16.805327  111868 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound]: bound to "pv-i-prebound"
I1108 02:32:16.805345  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound] status: set phase Bound
I1108 02:32:16.805443  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (4.196819ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41608]
I1108 02:32:16.805787  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pv-prebound: (5.347583ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41606]
I1108 02:32:16.806186  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pv-prebound/status: (6.346799ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41604]
E1108 02:32:16.806466  111868 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-pv-prebound": pod has unbound immediate PersistentVolumeClaims
I1108 02:32:16.806594  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pv-prebound
I1108 02:32:16.806609  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pv-prebound
I1108 02:32:16.806812  111868 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pv-prebound" match with Node "node-1"
I1108 02:32:16.806909  111868 scheduler_binder.go:653] PersistentVolume "pv-i-prebound", Node "node-2" mismatch for Pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pv-prebound": No matching NodeSelectorTerms
I1108 02:32:16.807019  111868 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pv-prebound", node "node-1"
I1108 02:32:16.807040  111868 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pv-prebound", node "node-1": all PVCs bound and nothing to do
I1108 02:32:16.807129  111868 factory.go:698] Attempting to bind pod-i-pv-prebound to node-1
I1108 02:32:16.812180  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-i-pv-prebound/status: (6.577942ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.813073  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pv-prebound/binding: (5.333332ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41608]
I1108 02:32:16.813482  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound" with version 35304
I1108 02:32:16.813507  111868 pv_controller.go:740] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound" entered phase "Bound"
I1108 02:32:16.813526  111868 pv_controller.go:955] volume "pv-i-prebound" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound"
I1108 02:32:16.813552  111868 pv_controller.go:956] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound (uid: a2c0ed62-c71f-41e9-9139-99599eab7d9c)", boundByController: false
I1108 02:32:16.813568  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1108 02:32:16.813613  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound" with version 35304
I1108 02:32:16.813640  111868 scheduler.go:756] pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-pv-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible.
I1108 02:32:16.813640  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound]: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1108 02:32:16.813707  111868 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound (uid: a2c0ed62-c71f-41e9-9139-99599eab7d9c)", boundByController: false
I1108 02:32:16.813726  111868 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound]: claim is already correctly bound
I1108 02:32:16.813735  111868 pv_controller.go:929] binding volume "pv-i-prebound" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound"
I1108 02:32:16.813771  111868 pv_controller.go:827] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound"
I1108 02:32:16.813799  111868 pv_controller.go:839] updating PersistentVolume[pv-i-prebound]: already bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound"
I1108 02:32:16.813813  111868 pv_controller.go:775] updating PersistentVolume[pv-i-prebound]: set phase Bound
I1108 02:32:16.813822  111868 pv_controller.go:778] updating PersistentVolume[pv-i-prebound]: phase Bound already set
I1108 02:32:16.813878  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I1108 02:32:16.813901  111868 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound]: already bound to "pv-i-prebound"
I1108 02:32:16.813910  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound] status: set phase Bound
I1108 02:32:16.813928  111868 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound] status: phase Bound already set
I1108 02:32:16.813938  111868 pv_controller.go:955] volume "pv-i-prebound" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound"
I1108 02:32:16.813976  111868 pv_controller.go:956] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound (uid: a2c0ed62-c71f-41e9-9139-99599eab7d9c)", boundByController: false
I1108 02:32:16.813992  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1108 02:32:16.817430  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (3.295981ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.908887  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-pv-prebound: (6.185394ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.911102  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-i-pv-prebound: (1.588796ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.913090  111868 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-prebound: (1.446377ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.924830  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (10.819602ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.932235  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (6.882213ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.933563  111868 pv_controller_base.go:265] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound" deleted
I1108 02:32:16.933705  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35296
I1108 02:32:16.933799  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound (uid: a2c0ed62-c71f-41e9-9139-99599eab7d9c)", boundByController: false
I1108 02:32:16.933867  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound
I1108 02:32:16.933926  111868 pv_controller.go:545] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound not found
I1108 02:32:16.933994  111868 pv_controller.go:573] volume "pv-i-prebound" is released and reclaim policy "Retain" will be executed
I1108 02:32:16.934090  111868 pv_controller.go:775] updating PersistentVolume[pv-i-prebound]: set phase Released
I1108 02:32:16.938052  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (3.601122ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.938786  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35344
I1108 02:32:16.938815  111868 pv_controller.go:796] volume "pv-i-prebound" entered phase "Released"
I1108 02:32:16.938828  111868 pv_controller.go:1009] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
I1108 02:32:16.939072  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 35344
I1108 02:32:16.939118  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Released, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound (uid: a2c0ed62-c71f-41e9-9139-99599eab7d9c)", boundByController: false
I1108 02:32:16.939130  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound
I1108 02:32:16.939150  111868 pv_controller.go:545] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound not found
I1108 02:32:16.939158  111868 pv_controller.go:1009] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
I1108 02:32:16.944677  111868 httplog.go:90] DELETE /api/v1/persistentvolumes: (11.332517ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.945614  111868 pv_controller_base.go:216] volume "pv-i-prebound" deleted
I1108 02:32:16.945655  111868 pv_controller_base.go:403] deletion of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-pv-prebound" was already processed
I1108 02:32:16.961389  111868 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (15.644306ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
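That was the "immediate pv prebound" case: here the volume carries boundByController: false and is first logged as pre-bound with `bound to: ".../pvc-i-pv-prebound (uid: )"`, i.e. the user set spec.claimRef with the claim's namespace/name but no UID, and syncVolume/syncClaim then complete the bind. The "No matching NodeSelectorTerms" line for node-2 also shows the PV restricting itself to node-1 via node affinity. A hedged sketch of such a pre-bound, node-pinned PV (assumed names and source type; not the test's fixture code):

```go
// Hypothetical sketch of a PV pre-bound to a claim, pinned to node-1.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "pv-i-prebound"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity:                      corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Mi")},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			StorageClassName:              "immediate-sc", // assumed; the test generates its class names
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/tmp/pv-i-prebound"}, // assumed source type
			},
			// Pre-binding by the user: namespace/name filled in, UID left empty —
			// exactly the `(uid: )` state syncVolume reports before it binds.
			ClaimRef: &corev1.ObjectReference{
				Namespace: "volume-scheduling", // assumed; the real namespace has a generated suffix
				Name:      "pvc-i-pv-prebound",
			},
			// Pin the volume to node-1, so node-2 fails with "No matching NodeSelectorTerms".
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"node-1"},
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("%s pre-bound to %s/%s\n", pv.Name, pv.Spec.ClaimRef.Namespace, pv.Spec.ClaimRef.Name)
}
```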
I1108 02:32:16.961655  111868 volume_binding_test.go:191] Running test wait cannot bind
I1108 02:32:16.963948  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.015282ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.967175  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.797989ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.970199  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (2.420999ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.971174  111868 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind", version 35359
I1108 02:32:16.971209  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:16.971248  111868 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind]: no volume found
I1108 02:32:16.971280  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind] status: set phase Pending
I1108 02:32:16.971296  111868 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind] status: phase Pending already set
I1108 02:32:16.971550  111868 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7", Name:"pvc-w-cannotbind", UID:"1571bc13-b503-4cd3-8b0e-5a9c51409ddc", APIVersion:"v1", ResourceVersion:"35359", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1108 02:32:16.976626  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (4.784291ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.977763  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (4.103769ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.978278  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind
I1108 02:32:16.978306  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind
I1108 02:32:16.978506  111868 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind", PVC "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind" on node "node-2"
I1108 02:32:16.978509  111868 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind", PVC "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind" on node "node-1"
I1108 02:32:16.978530  111868 scheduler_binder.go:725] storage class "wait-g7fv" of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind" does not support dynamic provisioning
I1108 02:32:16.978545  111868 scheduler_binder.go:725] storage class "wait-g7fv" of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind" does not support dynamic provisioning
I1108 02:32:16.978627  111868 factory.go:632] Unable to schedule volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I1108 02:32:16.979334  111868 scheduler.go:774] Updating pod condition for volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I1108 02:32:16.983566  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (3.004104ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:16.983960  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-cannotbind/status: (4.153609ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.985675  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-cannotbind: (5.620688ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
E1108 02:32:16.986080  111868 factory.go:673] pod is already present in the activeQ
I1108 02:32:16.986571  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-cannotbind: (1.9703ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1108 02:32:16.987038  111868 generic_scheduler.go:341] Preemption will not help schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind on any node.
I1108 02:32:16.987293  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind
I1108 02:32:16.987417  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind
I1108 02:32:16.987677  111868 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind", PVC "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind" on node "node-1"
I1108 02:32:16.987880  111868 scheduler_binder.go:725] storage class "wait-g7fv" of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind" does not support dynamic provisioning
I1108 02:32:16.988042  111868 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind", PVC "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind" on node "node-2"
I1108 02:32:16.988133  111868 scheduler_binder.go:725] storage class "wait-g7fv" of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind" does not support dynamic provisioning
I1108 02:32:16.988265  111868 factory.go:632] Unable to schedule volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I1108 02:32:16.988394  111868 scheduler.go:774] Updating pod condition for volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I1108 02:32:16.990975  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-cannotbind: (1.796128ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:16.996274  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-cannotbind: (4.600882ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:16.996868  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (6.46269ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41700]
I1108 02:32:16.997312  111868 generic_scheduler.go:341] Preemption will not help schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind on any node.
I1108 02:32:17.081322  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-cannotbind: (2.23793ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.085401  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-cannotbind: (3.257381ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.091037  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind
I1108 02:32:17.091082  111868 scheduler.go:607] Skip schedule deleting pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind
I1108 02:32:17.094131  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (2.46425ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:17.104966  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (18.970377ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.112401  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (6.880738ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.113449  111868 pv_controller_base.go:265] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind" deleted
I1108 02:32:17.118478  111868 httplog.go:90] DELETE /api/v1/persistentvolumes: (4.969172ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.127685  111868 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (8.398058ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
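The "wait cannot bind" case above uses a WaitForFirstConsumer class whose provisioner cannot provision ("storage class ... does not support dynamic provisioning") and no pre-created PV matches, so both nodes fail with "didn't find available persistent volumes to bind" and the pod stays Unschedulable, which is the expected outcome. A minimal sketch of a class with that behavior, assuming the conventional kubernetes.io/no-provisioner provisioner and an invented name (the generated class name wait-g7fv in the log suggests, but does not prove, this exact setup):

```go
// Hypothetical sketch: a WaitForFirstConsumer StorageClass with no usable provisioner.
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	sc := storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "wait-sc"},  // assumed; the test generates names like "wait-g7fv"
		Provisioner:       "kubernetes.io/no-provisioner",      // cannot dynamically provision, as the binder reports above
		VolumeBindingMode: &mode,                               // binding is delayed until a pod consumes the claim
	}
	fmt.Printf("class %s: provisioner=%s mode=%s\n", sc.Name, sc.Provisioner, *sc.VolumeBindingMode)
}
```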
I1108 02:32:17.128287  111868 volume_binding_test.go:191] Running test wait pv prebound
I1108 02:32:17.130721  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.087973ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.133065  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.893618ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.135645  111868 httplog.go:90] POST /api/v1/persistentvolumes: (1.987794ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.136323  111868 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-prebound", version 35409
I1108 02:32:17.136354  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-prebound]: phase: Pending, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound (uid: )", boundByController: false
I1108 02:32:17.136359  111868 pv_controller.go:504] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound
I1108 02:32:17.136366  111868 pv_controller.go:775] updating PersistentVolume[pv-w-prebound]: set phase Available
I1108 02:32:17.140449  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (4.229044ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.141116  111868 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound", version 35410
I1108 02:32:17.141154  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:17.141198  111868 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Pending, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound (uid: )", boundByController: false
I1108 02:32:17.141211  111868 pv_controller.go:929] binding volume "pv-w-prebound" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound"
I1108 02:32:17.141235  111868 pv_controller.go:827] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound"
I1108 02:32:17.141256  111868 pv_controller.go:847] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I1108 02:32:17.141300  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (4.542508ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:17.141559  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 35411
I1108 02:32:17.141583  111868 pv_controller.go:796] volume "pv-w-prebound" entered phase "Available"
I1108 02:32:17.142013  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 35411
I1108 02:32:17.142045  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound (uid: )", boundByController: false
I1108 02:32:17.142053  111868 pv_controller.go:504] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound
I1108 02:32:17.142065  111868 pv_controller.go:775] updating PersistentVolume[pv-w-prebound]: set phase Available
I1108 02:32:17.142075  111868 pv_controller.go:778] updating PersistentVolume[pv-w-prebound]: phase Available already set
I1108 02:32:17.144342  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (2.621884ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I1108 02:32:17.144622  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (2.407548ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.145685  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pv-prebound
I1108 02:32:17.145708  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pv-prebound
I1108 02:32:17.145912  111868 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pv-prebound" on node "node-1"
I1108 02:32:17.145979  111868 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pv-prebound", PVC "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound" on node "node-2"
I1108 02:32:17.145994  111868 scheduler_binder.go:725] storage class "wait-wxfp" of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound" does not support dynamic provisioning
I1108 02:32:17.146037  111868 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pv-prebound", node "node-1"
I1108 02:32:17.146064  111868 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-prebound", version 35411
I1108 02:32:17.146195  111868 pv_controller.go:850] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I1108 02:32:17.146248  111868 pv_controller.go:932] error binding volume "pv-w-prebound" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I1108 02:32:17.146273  111868 pv_controller_base.go:251] could not sync claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I1108 02:32:17.146357  111868 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pv-prebound", node "node-1"
I1108 02:32:17.146414  111868 scheduler_binder.go:404] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I1108 02:32:17.149197  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (2.38419ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.149798  111868 scheduler_binder.go:410] updating PersistentVolume[pv-w-prebound]: bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound"
I1108 02:32:17.150262  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 35414
I1108 02:32:17.150294  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound (uid: a9f200b8-70fa-4e0f-a998-22e7fc58e872)", boundByController: false
I1108 02:32:17.150304  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound
I1108 02:32:17.150324  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:17.150337  111868 pv_controller.go:604] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1108 02:32:17.150358  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound" with version 35410
I1108 02:32:17.150367  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:17.150388  111868 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Available, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound (uid: a9f200b8-70fa-4e0f-a998-22e7fc58e872)", boundByController: false
I1108 02:32:17.150397  111868 pv_controller.go:929] binding volume "pv-w-prebound" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound"
I1108 02:32:17.150406  111868 pv_controller.go:827] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound"
I1108 02:32:17.150422  111868 pv_controller.go:839] updating PersistentVolume[pv-w-prebound]: already bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound"
I1108 02:32:17.150429  111868 pv_controller.go:775] updating PersistentVolume[pv-w-prebound]: set phase Bound
I1108 02:32:17.153551  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (2.862776ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.153821  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 35415
I1108 02:32:17.153862  111868 pv_controller.go:796] volume "pv-w-prebound" entered phase "Bound"
I1108 02:32:17.153886  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I1108 02:32:17.153902  111868 pv_controller.go:899] volume "pv-w-prebound" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound"
I1108 02:32:17.153908  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 35415
I1108 02:32:17.153956  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound (uid: a9f200b8-70fa-4e0f-a998-22e7fc58e872)", boundByController: false
I1108 02:32:17.153982  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound
I1108 02:32:17.154001  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:17.154017  111868 pv_controller.go:604] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1108 02:32:17.156629  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-pv-prebound: (2.432937ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.157450  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound" with version 35416
I1108 02:32:17.157630  111868 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound]: bound to "pv-w-prebound"
I1108 02:32:17.157721  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound] status: set phase Bound
I1108 02:32:17.160373  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-pv-prebound/status: (2.241775ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.161399  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound" with version 35418
I1108 02:32:17.161435  111868 pv_controller.go:740] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound" entered phase "Bound"
I1108 02:32:17.161453  111868 pv_controller.go:955] volume "pv-w-prebound" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound"
I1108 02:32:17.161479  111868 pv_controller.go:956] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound (uid: a9f200b8-70fa-4e0f-a998-22e7fc58e872)", boundByController: false
I1108 02:32:17.161496  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I1108 02:32:17.161543  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound" with version 35418
I1108 02:32:17.161555  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound]: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I1108 02:32:17.161581  111868 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound (uid: a9f200b8-70fa-4e0f-a998-22e7fc58e872)", boundByController: false
I1108 02:32:17.161591  111868 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound]: claim is already correctly bound
I1108 02:32:17.161599  111868 pv_controller.go:929] binding volume "pv-w-prebound" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound"
I1108 02:32:17.161609  111868 pv_controller.go:827] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound"
I1108 02:32:17.161631  111868 pv_controller.go:839] updating PersistentVolume[pv-w-prebound]: already bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound"
I1108 02:32:17.161641  111868 pv_controller.go:775] updating PersistentVolume[pv-w-prebound]: set phase Bound
I1108 02:32:17.161651  111868 pv_controller.go:778] updating PersistentVolume[pv-w-prebound]: phase Bound already set
I1108 02:32:17.161661  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I1108 02:32:17.161695  111868 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound]: already bound to "pv-w-prebound"
I1108 02:32:17.161718  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound] status: set phase Bound
I1108 02:32:17.161737  111868 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound] status: phase Bound already set
I1108 02:32:17.161748  111868 pv_controller.go:955] volume "pv-w-prebound" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound"
I1108 02:32:17.161767  111868 pv_controller.go:956] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound (uid: a9f200b8-70fa-4e0f-a998-22e7fc58e872)", boundByController: false
I1108 02:32:17.161780  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I1108 02:32:17.212766  111868 cache.go:656] Couldn't expire cache for pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pv-prebound. Binding is still in progress.
I1108 02:32:17.248365  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pv-prebound: (1.850066ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.348568  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pv-prebound: (2.00668ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.448587  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pv-prebound: (1.993148ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.549684  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pv-prebound: (3.247429ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.648811  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pv-prebound: (2.362462ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.748078  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pv-prebound: (1.68295ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.848433  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pv-prebound: (1.796292ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:17.950059  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pv-prebound: (3.564517ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.048287  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pv-prebound: (1.840744ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.148294  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pv-prebound: (1.8122ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.150103  111868 scheduler_binder.go:553] All PVCs for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pv-prebound" are bound
I1108 02:32:18.150208  111868 factory.go:698] Attempting to bind pod-w-pv-prebound to node-1
I1108 02:32:18.153197  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pv-prebound/binding: (2.520243ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.153451  111868 scheduler.go:756] pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pv-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible.
I1108 02:32:18.158398  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (4.552171ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.248924  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pv-prebound: (2.447253ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.251895  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-pv-prebound: (2.330031ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.253682  111868 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-prebound: (1.350863ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.268468  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (14.322799ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.276110  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (4.860552ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.276445  111868 pv_controller_base.go:265] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound" deleted
I1108 02:32:18.276490  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 35415
I1108 02:32:18.276524  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound (uid: a9f200b8-70fa-4e0f-a998-22e7fc58e872)", boundByController: false
I1108 02:32:18.276542  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound
I1108 02:32:18.276586  111868 pv_controller.go:545] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound not found
I1108 02:32:18.276601  111868 pv_controller.go:573] volume "pv-w-prebound" is released and reclaim policy "Retain" will be executed
I1108 02:32:18.276615  111868 pv_controller.go:775] updating PersistentVolume[pv-w-prebound]: set phase Released
I1108 02:32:18.278987  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (1.998234ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:18.279499  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 35509
I1108 02:32:18.279525  111868 pv_controller.go:796] volume "pv-w-prebound" entered phase "Released"
I1108 02:32:18.279535  111868 pv_controller.go:1009] reclaimVolume[pv-w-prebound]: policy is Retain, nothing to do
I1108 02:32:18.280184  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 35509
I1108 02:32:18.280217  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-prebound]: phase: Released, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound (uid: a9f200b8-70fa-4e0f-a998-22e7fc58e872)", boundByController: false
I1108 02:32:18.280226  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound
I1108 02:32:18.280241  111868 pv_controller.go:545] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound not found
I1108 02:32:18.280248  111868 pv_controller.go:1009] reclaimVolume[pv-w-prebound]: policy is Retain, nothing to do
I1108 02:32:18.282486  111868 pv_controller_base.go:216] volume "pv-w-prebound" deleted
I1108 02:32:18.282531  111868 pv_controller_base.go:403] deletion of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-pv-prebound" was already processed
I1108 02:32:18.282805  111868 httplog.go:90] DELETE /api/v1/persistentvolumes: (6.283727ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.295528  111868 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (12.300539ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
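The block above is the pre-bound case: pv-w-prebound already carries a ClaimRef to pvc-w-pv-prebound, so syncVolume/syncClaim only confirm the existing binding, the scheduler's volume binder reports all PVCs bound, and pod-w-pv-prebound is bound straight to node-1 before teardown releases the Retain-policy volume. A minimal sketch of how such a pre-bound PV object can be constructed against k8s.io/api (the capacity, access mode, and hostPath path are assumptions for illustration, not the test's actual fixture values):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative pre-bound PV: ClaimRef is set up front, so the PV controller
	// only has to confirm the binding instead of matching a volume to the claim.
	// Capacity, access mode, and the hostPath path are assumed for this sketch.
	pv := v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "pv-w-prebound"},
		Spec: v1.PersistentVolumeSpec{
			Capacity: v1.ResourceList{
				v1.ResourceStorage: resource.MustParse("1Gi"),
			},
			AccessModes:                   []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimRetain,
			PersistentVolumeSource: v1.PersistentVolumeSource{
				HostPath: &v1.HostPathVolumeSource{Path: "/tmp/pv-w-prebound"},
			},
			ClaimRef: &v1.ObjectReference{
				Namespace: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7",
				Name:      "pvc-w-pv-prebound",
			},
		},
	}
	fmt.Printf("%s pre-bound to %s/%s\n", pv.Name, pv.Spec.ClaimRef.Namespace, pv.Spec.ClaimRef.Name)
}
```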
I1108 02:32:18.295798  111868 volume_binding_test.go:191] Running test wait can bind two
I1108 02:32:18.300498  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (4.347878ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.303212  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.164402ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.307232  111868 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-2", version 35522
I1108 02:32:18.307353  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Pending, bound to: "", boundByController: false
I1108 02:32:18.307415  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-2]: volume is unused
I1108 02:32:18.307473  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-2]: set phase Available
I1108 02:32:18.307501  111868 httplog.go:90] POST /api/v1/persistentvolumes: (3.728007ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.310295  111868 httplog.go:90] POST /api/v1/persistentvolumes: (2.308117ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.313519  111868 httplog.go:90] POST /api/v1/persistentvolumes: (2.789283ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.313971  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (6.198417ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:18.314232  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 35525
I1108 02:32:18.314260  111868 pv_controller.go:796] volume "pv-w-canbind-2" entered phase "Available"
I1108 02:32:18.314298  111868 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-3", version 35524
I1108 02:32:18.314315  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Pending, bound to: "", boundByController: false
I1108 02:32:18.314347  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-3]: volume is unused
I1108 02:32:18.314361  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-3]: set phase Available
I1108 02:32:18.317410  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (2.834205ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:18.317885  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 35528
I1108 02:32:18.317915  111868 pv_controller.go:796] volume "pv-w-canbind-3" entered phase "Available"
I1108 02:32:18.317943  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 35525
I1108 02:32:18.317957  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Available, bound to: "", boundByController: false
I1108 02:32:18.317972  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-2]: volume is unused
I1108 02:32:18.317977  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-2]: set phase Available
I1108 02:32:18.317983  111868 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-2]: phase Available already set
I1108 02:32:18.317998  111868 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-5", version 35527
I1108 02:32:18.318006  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Pending, bound to: "", boundByController: false
I1108 02:32:18.318018  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-5]: volume is unused
I1108 02:32:18.318022  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-5]: set phase Available
I1108 02:32:18.319086  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (4.497057ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.319334  111868 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2", version 35529
I1108 02:32:18.319375  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:18.319413  111868 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2]: no volume found
I1108 02:32:18.319439  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2] status: set phase Pending
I1108 02:32:18.319453  111868 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2] status: phase Pending already set
I1108 02:32:18.319663  111868 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7", Name:"pvc-w-canbind-2", UID:"6ad17e43-93ca-42f8-8db8-78b61fd719df", APIVersion:"v1", ResourceVersion:"35529", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1108 02:32:18.322980  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (3.244927ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.323487  111868 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3", version 35531
I1108 02:32:18.323526  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:18.323567  111868 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3]: no volume found
I1108 02:32:18.323592  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3] status: set phase Pending
I1108 02:32:18.323621  111868 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3] status: phase Pending already set
I1108 02:32:18.323657  111868 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7", Name:"pvc-w-canbind-3", UID:"18f7fceb-c959-40e1-acba-8b657f68ecf3", APIVersion:"v1", ResourceVersion:"35531", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1108 02:32:18.326120  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (6.002134ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:18.327406  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-5/status: (8.626849ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:18.327871  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 35533
I1108 02:32:18.328035  111868 pv_controller.go:796] volume "pv-w-canbind-5" entered phase "Available"
I1108 02:32:18.328177  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 35528
I1108 02:32:18.328312  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Available, bound to: "", boundByController: false
I1108 02:32:18.328412  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-3]: volume is unused
I1108 02:32:18.328479  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-3]: set phase Available
I1108 02:32:18.328547  111868 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-3]: phase Available already set
I1108 02:32:18.328630  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 35533
I1108 02:32:18.328715  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Available, bound to: "", boundByController: false
I1108 02:32:18.328786  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-5]: volume is unused
I1108 02:32:18.328835  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-5]: set phase Available
I1108 02:32:18.328927  111868 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-5]: phase Available already set
I1108 02:32:18.329796  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (4.825481ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.330603  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-canbind-2
I1108 02:32:18.330624  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-canbind-2
I1108 02:32:18.330937  111868 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-canbind-2" on node "node-2"
I1108 02:32:18.330939  111868 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-canbind-2", PVC "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3" on node "node-1"
I1108 02:32:18.330967  111868 scheduler_binder.go:725] storage class "wait-78wj" of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3" does not support dynamic provisioning
I1108 02:32:18.331028  111868 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-canbind-2", node "node-2"
I1108 02:32:18.331063  111868 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-canbind-2", version 35525
I1108 02:32:18.331077  111868 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-canbind-3", version 35528
I1108 02:32:18.331138  111868 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-canbind-2", node "node-2"
I1108 02:32:18.331152  111868 scheduler_binder.go:404] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2" bound to volume "pv-w-canbind-2"
I1108 02:32:18.331563  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (4.142632ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:18.334182  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2: (2.702611ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41698]
I1108 02:32:18.334364  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 35538
I1108 02:32:18.334407  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Available, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2 (uid: 6ad17e43-93ca-42f8-8db8-78b61fd719df)", boundByController: true
I1108 02:32:18.334420  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2
I1108 02:32:18.334445  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:18.334470  111868 scheduler_binder.go:410] updating PersistentVolume[pv-w-canbind-2]: bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2"
I1108 02:32:18.334493  111868 scheduler_binder.go:404] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3" bound to volume "pv-w-canbind-3"
I1108 02:32:18.334494  111868 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I1108 02:32:18.334537  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2" with version 35529
I1108 02:32:18.334653  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:18.334723  111868 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2]: volume "pv-w-canbind-2" found: phase: Available, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2 (uid: 6ad17e43-93ca-42f8-8db8-78b61fd719df)", boundByController: true
I1108 02:32:18.334791  111868 pv_controller.go:929] binding volume "pv-w-canbind-2" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2"
I1108 02:32:18.334870  111868 pv_controller.go:827] updating PersistentVolume[pv-w-canbind-2]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2"
I1108 02:32:18.334947  111868 pv_controller.go:839] updating PersistentVolume[pv-w-canbind-2]: already bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2"
I1108 02:32:18.335014  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-2]: set phase Bound
I1108 02:32:18.338306  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3: (3.065227ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:18.338546  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 35540
I1108 02:32:18.338592  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Available, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3 (uid: 18f7fceb-c959-40e1-acba-8b657f68ecf3)", boundByController: true
I1108 02:32:18.338603  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3
I1108 02:32:18.338600  111868 scheduler_binder.go:410] updating PersistentVolume[pv-w-canbind-3]: bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3"
I1108 02:32:18.338616  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:18.338629  111868 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind-3]: volume not bound yet, waiting for syncClaim to fix it
I1108 02:32:18.340425  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (5.010732ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:18.340917  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 35541
I1108 02:32:18.341085  111868 pv_controller.go:796] volume "pv-w-canbind-2" entered phase "Bound"
I1108 02:32:18.341299  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2]: binding to "pv-w-canbind-2"
I1108 02:32:18.341625  111868 pv_controller.go:899] volume "pv-w-canbind-2" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2"
I1108 02:32:18.342925  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 35541
I1108 02:32:18.342983  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2 (uid: 6ad17e43-93ca-42f8-8db8-78b61fd719df)", boundByController: true
I1108 02:32:18.342995  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2
I1108 02:32:18.343031  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:18.343054  111868 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I1108 02:32:18.345634  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-canbind-2: (3.150276ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:18.345942  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2" with version 35543
I1108 02:32:18.345970  111868 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2]: bound to "pv-w-canbind-2"
I1108 02:32:18.345983  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2] status: set phase Bound
I1108 02:32:18.352235  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-canbind-2/status: (5.950341ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:18.352604  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2" with version 35545
I1108 02:32:18.352645  111868 pv_controller.go:740] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2" entered phase "Bound"
I1108 02:32:18.352719  111868 pv_controller.go:955] volume "pv-w-canbind-2" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2"
I1108 02:32:18.352761  111868 pv_controller.go:956] volume "pv-w-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2 (uid: 6ad17e43-93ca-42f8-8db8-78b61fd719df)", boundByController: true
I1108 02:32:18.352779  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2" status after binding: phase: Bound, bound to: "pv-w-canbind-2", bindCompleted: true, boundByController: true
I1108 02:32:18.352822  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3" with version 35531
I1108 02:32:18.352856  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:18.352905  111868 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3]: volume "pv-w-canbind-3" found: phase: Available, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3 (uid: 18f7fceb-c959-40e1-acba-8b657f68ecf3)", boundByController: true
I1108 02:32:18.352920  111868 pv_controller.go:929] binding volume "pv-w-canbind-3" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3"
I1108 02:32:18.352948  111868 pv_controller.go:827] updating PersistentVolume[pv-w-canbind-3]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3"
I1108 02:32:18.352969  111868 pv_controller.go:839] updating PersistentVolume[pv-w-canbind-3]: already bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3"
I1108 02:32:18.352979  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-3]: set phase Bound
I1108 02:32:18.358979  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (5.62888ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:18.359557  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 35548
I1108 02:32:18.359592  111868 pv_controller.go:796] volume "pv-w-canbind-3" entered phase "Bound"
I1108 02:32:18.359608  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3]: binding to "pv-w-canbind-3"
I1108 02:32:18.359639  111868 pv_controller.go:899] volume "pv-w-canbind-3" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3"
I1108 02:32:18.359764  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 35548
I1108 02:32:18.359812  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3 (uid: 18f7fceb-c959-40e1-acba-8b657f68ecf3)", boundByController: true
I1108 02:32:18.359911  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3
I1108 02:32:18.359937  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:18.359965  111868 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind-3]: volume not bound yet, waiting for syncClaim to fix it
I1108 02:32:18.366248  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-canbind-3: (5.793592ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:18.366868  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3" with version 35550
I1108 02:32:18.366907  111868 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3]: bound to "pv-w-canbind-3"
I1108 02:32:18.366919  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3] status: set phase Bound
I1108 02:32:18.376102  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-canbind-3/status: (8.841657ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:18.376425  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3" with version 35552
I1108 02:32:18.376458  111868 pv_controller.go:740] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3" entered phase "Bound"
I1108 02:32:18.376472  111868 pv_controller.go:955] volume "pv-w-canbind-3" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3"
I1108 02:32:18.376507  111868 pv_controller.go:956] volume "pv-w-canbind-3" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3 (uid: 18f7fceb-c959-40e1-acba-8b657f68ecf3)", boundByController: true
I1108 02:32:18.376519  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3" status after binding: phase: Bound, bound to: "pv-w-canbind-3", bindCompleted: true, boundByController: true
I1108 02:32:18.376550  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2" with version 35545
I1108 02:32:18.376578  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2]: phase: Bound, bound to: "pv-w-canbind-2", bindCompleted: true, boundByController: true
I1108 02:32:18.376593  111868 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2]: volume "pv-w-canbind-2" found: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2 (uid: 6ad17e43-93ca-42f8-8db8-78b61fd719df)", boundByController: true
I1108 02:32:18.376601  111868 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2]: claim is already correctly bound
I1108 02:32:18.376609  111868 pv_controller.go:929] binding volume "pv-w-canbind-2" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2"
I1108 02:32:18.376620  111868 pv_controller.go:827] updating PersistentVolume[pv-w-canbind-2]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2"
I1108 02:32:18.376640  111868 pv_controller.go:839] updating PersistentVolume[pv-w-canbind-2]: already bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2"
I1108 02:32:18.376660  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-2]: set phase Bound
I1108 02:32:18.376666  111868 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-2]: phase Bound already set
I1108 02:32:18.376674  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2]: binding to "pv-w-canbind-2"
I1108 02:32:18.376697  111868 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2]: already bound to "pv-w-canbind-2"
I1108 02:32:18.376705  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2] status: set phase Bound
I1108 02:32:18.376734  111868 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2] status: phase Bound already set
I1108 02:32:18.376745  111868 pv_controller.go:955] volume "pv-w-canbind-2" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2"
I1108 02:32:18.376757  111868 pv_controller.go:956] volume "pv-w-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2 (uid: 6ad17e43-93ca-42f8-8db8-78b61fd719df)", boundByController: true
I1108 02:32:18.376766  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2" status after binding: phase: Bound, bound to: "pv-w-canbind-2", bindCompleted: true, boundByController: true
I1108 02:32:18.376780  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3" with version 35552
I1108 02:32:18.376791  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3]: phase: Bound, bound to: "pv-w-canbind-3", bindCompleted: true, boundByController: true
I1108 02:32:18.376820  111868 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3]: volume "pv-w-canbind-3" found: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3 (uid: 18f7fceb-c959-40e1-acba-8b657f68ecf3)", boundByController: true
I1108 02:32:18.376827  111868 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3]: claim is already correctly bound
I1108 02:32:18.376863  111868 pv_controller.go:929] binding volume "pv-w-canbind-3" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3"
I1108 02:32:18.376872  111868 pv_controller.go:827] updating PersistentVolume[pv-w-canbind-3]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3"
I1108 02:32:18.376885  111868 pv_controller.go:839] updating PersistentVolume[pv-w-canbind-3]: already bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3"
I1108 02:32:18.376892  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-3]: set phase Bound
I1108 02:32:18.376918  111868 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-3]: phase Bound already set
I1108 02:32:18.376926  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3]: binding to "pv-w-canbind-3"
I1108 02:32:18.376977  111868 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3]: already bound to "pv-w-canbind-3"
I1108 02:32:18.377008  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3] status: set phase Bound
I1108 02:32:18.377028  111868 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3] status: phase Bound already set
I1108 02:32:18.377065  111868 pv_controller.go:955] volume "pv-w-canbind-3" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3"
I1108 02:32:18.377142  111868 pv_controller.go:956] volume "pv-w-canbind-3" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3 (uid: 18f7fceb-c959-40e1-acba-8b657f68ecf3)", boundByController: true
I1108 02:32:18.377163  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3" status after binding: phase: Bound, bound to: "pv-w-canbind-3", bindCompleted: true, boundByController: true
I1108 02:32:18.433162  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind-2: (2.493534ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:18.532400  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind-2: (1.695ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:18.633352  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind-2: (2.670157ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:18.735418  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind-2: (4.731345ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:18.835612  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind-2: (4.882458ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:18.933322  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind-2: (2.657161ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.034653  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind-2: (3.999063ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.133965  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind-2: (3.2653ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.213388  111868 cache.go:656] Couldn't expire cache for pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-canbind-2. Binding is still in progress.
I1108 02:32:19.232492  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind-2: (1.898765ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.332936  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind-2: (2.266485ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.339011  111868 scheduler_binder.go:553] All PVCs for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-canbind-2" are bound
I1108 02:32:19.339097  111868 factory.go:698] Attempting to bind pod-w-canbind-2 to node-2
I1108 02:32:19.345303  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind-2/binding: (5.572214ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.345705  111868 scheduler.go:756] pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-canbind-2 is bound successfully on node "node-2", 2 nodes evaluated, 1 nodes were found feasible.
I1108 02:32:19.348299  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (2.055093ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.432611  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind-2: (1.961019ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.435434  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-canbind-2: (2.127065ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.438443  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-canbind-3: (1.776886ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.440652  111868 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-2: (1.621155ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.443402  111868 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-3: (2.011866ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.446052  111868 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-5: (1.889499ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.456230  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (9.579237ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.466096  111868 pv_controller_base.go:265] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2" deleted
I1108 02:32:19.466160  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 35541
I1108 02:32:19.466199  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2 (uid: 6ad17e43-93ca-42f8-8db8-78b61fd719df)", boundByController: true
I1108 02:32:19.466223  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2
I1108 02:32:19.468663  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-canbind-2: (2.123643ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.469032  111868 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2 not found
I1108 02:32:19.469059  111868 pv_controller.go:573] volume "pv-w-canbind-2" is released and reclaim policy "Retain" will be executed
I1108 02:32:19.469071  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-2]: set phase Released
I1108 02:32:19.470470  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (13.754988ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.470833  111868 pv_controller_base.go:265] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3" deleted
I1108 02:32:19.474470  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (5.037743ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.474760  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 35883
I1108 02:32:19.474815  111868 pv_controller.go:796] volume "pv-w-canbind-2" entered phase "Released"
I1108 02:32:19.474827  111868 pv_controller.go:1009] reclaimVolume[pv-w-canbind-2]: policy is Retain, nothing to do
I1108 02:32:19.474883  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 35548
I1108 02:32:19.474912  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3 (uid: 18f7fceb-c959-40e1-acba-8b657f68ecf3)", boundByController: true
I1108 02:32:19.475046  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3
I1108 02:32:19.477470  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-canbind-3: (2.110494ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.478043  111868 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3 not found
I1108 02:32:19.478107  111868 pv_controller.go:573] volume "pv-w-canbind-3" is released and reclaim policy "Retain" will be executed
I1108 02:32:19.478120  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-3]: set phase Released
I1108 02:32:19.485530  111868 store.go:365] GuaranteedUpdate of /89d04f6a-c49a-49bf-9f59-18031bc0a51b/persistentvolumes/pv-w-canbind-3 failed because of a conflict, going to retry
I1108 02:32:19.485734  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (3.174929ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.486039  111868 pv_controller.go:788] updating PersistentVolume[pv-w-canbind-3]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pv-w-canbind-3": StorageError: invalid object, Code: 4, Key: /89d04f6a-c49a-49bf-9f59-18031bc0a51b/persistentvolumes/pv-w-canbind-3, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 53f7c74c-7713-42c7-a1aa-89dc6a4a510a, UID in object meta: 
I1108 02:32:19.486078  111868 pv_controller_base.go:204] could not sync volume "pv-w-canbind-3": Operation cannot be fulfilled on persistentvolumes "pv-w-canbind-3": StorageError: invalid object, Code: 4, Key: /89d04f6a-c49a-49bf-9f59-18031bc0a51b/persistentvolumes/pv-w-canbind-3, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 53f7c74c-7713-42c7-a1aa-89dc6a4a510a, UID in object meta: 
I1108 02:32:19.486144  111868 pv_controller_base.go:216] volume "pv-w-canbind-2" deleted
I1108 02:32:19.486173  111868 pv_controller_base.go:216] volume "pv-w-canbind-3" deleted
I1108 02:32:19.486200  111868 pv_controller_base.go:403] deletion of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-2" was already processed
I1108 02:32:19.486218  111868 pv_controller_base.go:403] deletion of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind-3" was already processed
I1108 02:32:19.489506  111868 pv_controller_base.go:216] volume "pv-w-canbind-5" deleted
I1108 02:32:19.489970  111868 httplog.go:90] DELETE /api/v1/persistentvolumes: (18.034141ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.551577  111868 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (60.952537ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
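The "wait can bind two" run above exercises StorageClasses with volumeBindingMode WaitForFirstConsumer: both claims stay Pending with the "waiting for first consumer to be created before binding" event until pod-w-canbind-2 is scheduled, at which point the binder assumes pv-w-canbind-2 and pv-w-canbind-3 on node-2 and the PV controller completes the bindings. A minimal sketch of a StorageClass in that mode (the class name is hypothetical; the log also notes the test's classes do not support dynamic provisioning, which matches a static, no-provisioner class):

```go
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative StorageClass in WaitForFirstConsumer mode: claims using it are
	// not bound by the PV controller until a pod that consumes them is scheduled,
	// which is why the log shows "WaitForFirstConsumer" events before any binding.
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	sc := storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "wait-mode-class"}, // hypothetical name
		Provisioner:       "kubernetes.io/no-provisioner",             // static volumes only
		VolumeBindingMode: &mode,
	}
	fmt.Println(sc.Name, string(*sc.VolumeBindingMode))
}
```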
I1108 02:32:19.551834  111868 volume_binding_test.go:191] Running test wait cannot bind two
I1108 02:32:19.554655  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.464261ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.559960  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (4.627826ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.565630  111868 httplog.go:90] POST /api/v1/persistentvolumes: (4.765193ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.567067  111868 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-cannotbind-1", version 35897
I1108 02:32:19.567116  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-cannotbind-1]: phase: Pending, bound to: "", boundByController: false
I1108 02:32:19.567137  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-w-cannotbind-1]: volume is unused
I1108 02:32:19.567145  111868 pv_controller.go:775] updating PersistentVolume[pv-w-cannotbind-1]: set phase Available
I1108 02:32:19.570928  111868 httplog.go:90] POST /api/v1/persistentvolumes: (3.529477ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.576662  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-cannotbind-1/status: (7.960316ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.576990  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-1" with version 35901
I1108 02:32:19.577031  111868 pv_controller.go:796] volume "pv-w-cannotbind-1" entered phase "Available"
I1108 02:32:19.577076  111868 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-cannotbind-2", version 35899
I1108 02:32:19.577097  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-cannotbind-2]: phase: Pending, bound to: "", boundByController: false
I1108 02:32:19.577135  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume is unused
I1108 02:32:19.577156  111868 pv_controller.go:775] updating PersistentVolume[pv-w-cannotbind-2]: set phase Available
I1108 02:32:19.577286  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (5.504845ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.579887  111868 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind-1", version 35902
I1108 02:32:19.579921  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind-1]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:19.579963  111868 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind-1]: no volume found
I1108 02:32:19.579992  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind-1] status: set phase Pending
I1108 02:32:19.580008  111868 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind-1] status: phase Pending already set
I1108 02:32:19.580042  111868 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7", Name:"pvc-w-cannotbind-1", UID:"409cfb33-a391-4c02-858f-0f47e8c8785f", APIVersion:"v1", ResourceVersion:"35902", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1108 02:32:19.580959  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (3.22344ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.581604  111868 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind-2", version 35903
I1108 02:32:19.581641  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:19.581675  111868 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind-2]: no volume found
I1108 02:32:19.581704  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind-2] status: set phase Pending
I1108 02:32:19.581718  111868 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind-2] status: phase Pending already set
I1108 02:32:19.581745  111868 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7", Name:"pvc-w-cannotbind-2", UID:"9586d1ea-38f5-4b44-b390-38ff5277a502", APIVersion:"v1", ResourceVersion:"35903", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1108 02:32:19.582390  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-cannotbind-2/status: (3.982173ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.582741  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 35904
I1108 02:32:19.582764  111868 pv_controller.go:796] volume "pv-w-cannotbind-2" entered phase "Available"
I1108 02:32:19.582791  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-1" with version 35901
I1108 02:32:19.582813  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-cannotbind-1]: phase: Available, bound to: "", boundByController: false
I1108 02:32:19.582833  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-w-cannotbind-1]: volume is unused
I1108 02:32:19.582855  111868 pv_controller.go:775] updating PersistentVolume[pv-w-cannotbind-1]: set phase Available
I1108 02:32:19.582863  111868 pv_controller.go:778] updating PersistentVolume[pv-w-cannotbind-1]: phase Available already set
I1108 02:32:19.582882  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 35904
I1108 02:32:19.582893  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-cannotbind-2]: phase: Available, bound to: "", boundByController: false
I1108 02:32:19.582910  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume is unused
I1108 02:32:19.582916  111868 pv_controller.go:775] updating PersistentVolume[pv-w-cannotbind-2]: set phase Available
I1108 02:32:19.582922  111868 pv_controller.go:778] updating PersistentVolume[pv-w-cannotbind-2]: phase Available already set
I1108 02:32:19.587143  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (5.492674ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41776]
I1108 02:32:19.587195  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (5.33901ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43056]
I1108 02:32:19.589380  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind-2
I1108 02:32:19.589531  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind-2
I1108 02:32:19.589873  111868 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind-2", PVC "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind-2" on node "node-1"
I1108 02:32:19.590185  111868 scheduler_binder.go:725] storage class "wait-rp5q" of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind-2" does not support dynamic provisioning
I1108 02:32:19.590791  111868 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind-2", PVC "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind-2" on node "node-2"
I1108 02:32:19.591035  111868 scheduler_binder.go:725] storage class "wait-rp5q" of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind-2" does not support dynamic provisioning
I1108 02:32:19.591212  111868 factory.go:632] Unable to schedule volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind-2: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I1108 02:32:19.591346  111868 scheduler.go:774] Updating pod condition for volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind-2 to (PodScheduled==False, Reason=Unschedulable)
I1108 02:32:19.599169  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (10.407152ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43054]
I1108 02:32:19.601692  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (9.831948ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.602256  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-cannotbind-2: (9.037071ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43060]
I1108 02:32:19.603523  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-cannotbind-2/status: (8.875251ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43062]
I1108 02:32:19.605862  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-cannotbind-2: (1.670268ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.606229  111868 generic_scheduler.go:341] Preemption will not help schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind-2 on any node.
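[editor's note] The unschedulable result above comes from a claim whose storage class delays binding until a consumer is scheduled but offers no dynamic provisioning, so with no pre-existing PV that fits, neither node can satisfy it. A minimal Go sketch of such a class using the k8s.io/api types (the name and provisioner are illustrative; the test generates names like "wait-rp5q", and the log only tells us the class "does not support dynamic provisioning"):

package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Binding is delayed until a consuming pod is scheduled, and the
	// no-provisioner placeholder means no volume can be created on demand,
	// matching "does not support dynamic provisioning" in the log above.
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	sc := &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "wait-example"},
		Provisioner:       "kubernetes.io/no-provisioner",
		VolumeBindingMode: &mode,
	}
	fmt.Printf("%s: provisioner=%s mode=%s\n", sc.Name, sc.Provisioner, *sc.VolumeBindingMode)
}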
I1108 02:32:19.701207  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-cannotbind-2: (12.435634ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.705107  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-cannotbind-1: (3.142584ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.707084  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-cannotbind-2: (1.509191ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.710560  111868 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-cannotbind-1: (3.00686ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.713157  111868 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-cannotbind-2: (1.915445ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.721930  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind-2
I1108 02:32:19.721974  111868 scheduler.go:607] Skip schedule deleting pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-cannotbind-2
I1108 02:32:19.725288  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (2.601325ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43054]
I1108 02:32:19.727787  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (14.110449ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.736266  111868 pv_controller_base.go:265] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind-1" deleted
I1108 02:32:19.738266  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (9.915867ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.738666  111868 pv_controller_base.go:265] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-cannotbind-2" deleted
I1108 02:32:19.746995  111868 pv_controller_base.go:216] volume "pv-w-cannotbind-1" deleted
I1108 02:32:19.748466  111868 httplog.go:90] DELETE /api/v1/persistentvolumes: (9.693811ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.749962  111868 pv_controller_base.go:216] volume "pv-w-cannotbind-2" deleted
I1108 02:32:19.764478  111868 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (15.431342ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.764686  111868 volume_binding_test.go:191] Running test immediate can bind
I1108 02:32:19.766834  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.88092ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.769015  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.603364ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.776346  111868 httplog.go:90] POST /api/v1/persistentvolumes: (5.666533ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.778548  111868 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-canbind", version 35997
I1108 02:32:19.778594  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Pending, bound to: "", boundByController: false
I1108 02:32:19.778616  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-i-canbind]: volume is unused
I1108 02:32:19.778624  111868 pv_controller.go:775] updating PersistentVolume[pv-i-canbind]: set phase Available
I1108 02:32:19.779528  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (2.532263ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.780135  111868 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind", version 36000
I1108 02:32:19.780174  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:19.780203  111868 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind]: no volume found
I1108 02:32:19.780220  111868 pv_controller.go:1324] provisionClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind]: started
E1108 02:32:19.780274  111868 pv_controller.go:1329] error finding provisioning plugin for claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind: no volume plugin matched
I1108 02:32:19.780410  111868 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7", Name:"pvc-i-canbind", UID:"08468d7f-938c-4a0f-8e72-02f786550446", APIVersion:"v1", ResourceVersion:"36000", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' no volume plugin matched
I1108 02:32:19.782587  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (3.658746ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43054]
I1108 02:32:19.783208  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 36002
I1108 02:32:19.783237  111868 pv_controller.go:796] volume "pv-i-canbind" entered phase "Available"
I1108 02:32:19.783278  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 36002
I1108 02:32:19.783296  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Available, bound to: "", boundByController: false
I1108 02:32:19.783317  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-i-canbind]: volume is unused
I1108 02:32:19.783328  111868 pv_controller.go:775] updating PersistentVolume[pv-i-canbind]: set phase Available
I1108 02:32:19.783348  111868 pv_controller.go:778] updating PersistentVolume[pv-i-canbind]: phase Available already set
I1108 02:32:19.783669  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (3.401475ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.784356  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind
I1108 02:32:19.784670  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind
E1108 02:32:19.784987  111868 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims
E1108 02:32:19.785119  111868 factory.go:648] Error scheduling volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind: error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims; retrying
I1108 02:32:19.791529  111868 scheduler.go:774] Updating pod condition for volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind to (PodScheduled==False, Reason=Unschedulable)
I1108 02:32:19.792422  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (10.734596ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43124]
I1108 02:32:19.794378  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (3.248955ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.803989  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (3.271908ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43054]
I1108 02:32:19.812100  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind/status: (18.745616ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43124]
E1108 02:32:19.812498  111868 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims
I1108 02:32:19.812702  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind
I1108 02:32:19.812719  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind
E1108 02:32:19.813034  111868 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims
E1108 02:32:19.813038  111868 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims
E1108 02:32:19.813088  111868 factory.go:648] Error scheduling volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind: error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims; retrying
I1108 02:32:19.813120  111868 scheduler.go:774] Updating pod condition for volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind to (PodScheduled==False, Reason=Unschedulable)
E1108 02:32:19.813141  111868 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims
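[editor's note] "pod has unbound immediate PersistentVolumeClaims" means the pod references a claim whose storage class binds immediately (the default mode) rather than at scheduling time, so the scheduler simply retries until the PV controller binds the claim on its own, which here happens at the 02:32:30 resync further down. A rough sketch of such a claim, with illustrative names and the field layout of the k8s.io/api release this test era builds against:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// className is assumed to name a StorageClass with Immediate binding mode.
	className := "immediate-example"
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-i-example", Namespace: "demo"},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &className,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}
	fmt.Println(pvc.Name, *pvc.Spec.StorageClassName)
}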
I1108 02:32:19.817914  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.427084ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I1108 02:32:19.818607  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (2.58629ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.887655  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (3.075946ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:19.986633  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.093946ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:20.086208  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.696837ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:20.186663  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.974574ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:20.286413  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.857398ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:20.386596  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.982357ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:20.486321  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.760167ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:20.586442  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.873146ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:20.686275  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.735468ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:20.786418  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.854102ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:20.887934  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (3.360938ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:20.986866  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.278471ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:21.087100  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.102736ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:21.186591  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.953906ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:21.286395  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.758832ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:21.386212  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.742765ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:21.487269  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.787923ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:21.586943  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.399132ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:21.686497  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.011269ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:21.789483  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (4.964986ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:21.887136  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.430371ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:21.986831  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.966492ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:22.086339  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.747834ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:22.186580  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.019027ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:22.287009  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.474223ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:22.388744  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (3.811111ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:22.486368  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.809531ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:22.586422  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.868754ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:22.686180  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.657083ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:22.787160  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.843575ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:22.887012  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.934611ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:22.986416  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.905823ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:23.086493  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.918204ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:23.186543  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.089258ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:23.288068  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (3.595955ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:23.386935  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.413777ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:23.486633  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.028362ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:23.586677  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.813865ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:23.687982  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (3.418431ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:23.786405  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.885216ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:23.886861  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.271386ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:23.986236  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.727892ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:24.087510  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.996899ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:24.186418  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.843585ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:24.288021  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (3.42829ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:24.386628  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.022027ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:24.488422  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (3.808456ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:24.587126  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.574856ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:24.686752  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.224002ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:24.786352  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.754285ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:24.887188  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.47006ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:24.987906  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (3.303618ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:25.092542  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (7.965572ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:25.129798  111868 httplog.go:90] GET /api/v1/namespaces/default: (1.700296ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:25.131604  111868 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.411448ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:25.133268  111868 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.218575ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:25.186212  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.749699ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:25.289373  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (4.776258ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:25.389018  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (4.501973ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:25.487056  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.873274ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:25.586581  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.981353ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:25.687880  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (3.388355ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:25.787509  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.924249ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:25.886585  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.020311ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:25.986526  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.870086ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:26.086147  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.606542ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:26.186608  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.89124ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:26.286568  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.985736ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:26.386470  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.995021ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:26.486444  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.897053ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:26.588027  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (3.509544ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:26.686491  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.939415ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:26.786176  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.670514ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:26.886605  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.059173ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:26.986500  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.954367ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:27.086654  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.099726ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:27.186554  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.983594ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:27.286683  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.029116ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:27.386757  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.13809ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:27.486710  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.090386ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:27.586522  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.973932ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:27.686536  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.962614ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:27.786607  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.004121ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:27.886777  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.251294ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:27.992555  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (8.0288ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:28.086374  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.767165ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:28.186607  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.031156ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:28.286624  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.079875ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:28.386484  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.921433ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:28.489657  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (5.100174ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:28.588558  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (4.101155ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:28.687677  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (3.132967ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:28.788859  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (4.289547ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:28.886435  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.861354ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:28.986742  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.172468ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:29.086198  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.693482ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:29.186489  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.882848ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:29.286730  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.09057ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:29.387688  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (3.176306ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:29.487587  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.869493ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:29.586540  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.927106ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:29.686129  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.623592ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:29.786563  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.018585ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:29.886572  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.048916ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:29.986439  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.880887ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:30.086339  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.797354ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:30.188605  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (4.072501ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:30.287430  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.904543ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:30.386488  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.018704ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
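[editor's note] The long run of GET requests above is the test harness polling the pod roughly every 100ms while it waits for it to schedule. A loop along these lines would produce that request pattern; isPodScheduled is a hypothetical stand-in for a client-go get on the pod followed by a check of spec.nodeName:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// isPodScheduled is a hypothetical stand-in for fetching the pod from the
// API server and reporting whether it has been assigned a node.
func isPodScheduled() (bool, error) {
	return false, nil
}

func main() {
	// Check every 100ms (the spacing of the httplog GETs above) until the
	// pod schedules or the timeout is reached.
	if err := wait.PollImmediate(100*time.Millisecond, 30*time.Second, isPodScheduled); err != nil {
		fmt.Println("pod did not schedule in time:", err)
	}
}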
I1108 02:32:30.411175  111868 pv_controller_base.go:426] resyncing PV controller
I1108 02:32:30.411289  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 36002
I1108 02:32:30.411334  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Available, bound to: "", boundByController: false
I1108 02:32:30.411355  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-i-canbind]: volume is unused
I1108 02:32:30.411361  111868 pv_controller.go:775] updating PersistentVolume[pv-i-canbind]: set phase Available
I1108 02:32:30.411368  111868 pv_controller.go:778] updating PersistentVolume[pv-i-canbind]: phase Available already set
I1108 02:32:30.411387  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind" with version 36000
I1108 02:32:30.411399  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:30.411428  111868 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind]: volume "pv-i-canbind" found: phase: Available, bound to: "", boundByController: false
I1108 02:32:30.411436  111868 pv_controller.go:929] binding volume "pv-i-canbind" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind"
I1108 02:32:30.411442  111868 pv_controller.go:827] updating PersistentVolume[pv-i-canbind]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind"
I1108 02:32:30.411483  111868 pv_controller.go:847] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind" bound to volume "pv-i-canbind"
I1108 02:32:30.417192  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 38008
I1108 02:32:30.417247  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Available, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind (uid: 08468d7f-938c-4a0f-8e72-02f786550446)", boundByController: true
I1108 02:32:30.417262  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind
I1108 02:32:30.417283  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:30.417300  111868 pv_controller.go:601] synchronizing PersistentVolume[pv-i-canbind]: volume not bound yet, waiting for syncClaim to fix it
I1108 02:32:30.417576  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind
I1108 02:32:30.417598  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind
E1108 02:32:30.417800  111868 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims
E1108 02:32:30.417889  111868 factory.go:648] Error scheduling volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind: error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims; retrying
I1108 02:32:30.417922  111868 scheduler.go:774] Updating pod condition for volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind to (PodScheduled==False, Reason=Unschedulable)
E1108 02:32:30.417939  111868 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims
I1108 02:32:30.420804  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind: (8.835516ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:30.421207  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 38008
I1108 02:32:30.421246  111868 pv_controller.go:860] updating PersistentVolume[pv-i-canbind]: bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind"
I1108 02:32:30.421259  111868 pv_controller.go:775] updating PersistentVolume[pv-i-canbind]: set phase Bound
I1108 02:32:30.421419  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (3.069084ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I1108 02:32:30.421808  111868 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events/pod-i-canbind.15d50f2f1a305120: (3.055584ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:30.428147  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 38013
I1108 02:32:30.428207  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind (uid: 08468d7f-938c-4a0f-8e72-02f786550446)", boundByController: true
I1108 02:32:30.428223  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind
I1108 02:32:30.428242  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:30.428258  111868 pv_controller.go:601] synchronizing PersistentVolume[pv-i-canbind]: volume not bound yet, waiting for syncClaim to fix it
I1108 02:32:30.428719  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (6.597282ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42600]
I1108 02:32:30.429083  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 38013
I1108 02:32:30.429119  111868 pv_controller.go:796] volume "pv-i-canbind" entered phase "Bound"
I1108 02:32:30.429134  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind]: binding to "pv-i-canbind"
I1108 02:32:30.429160  111868 pv_controller.go:899] volume "pv-i-canbind" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind"
I1108 02:32:30.434411  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-i-canbind: (4.798709ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:30.435067  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind" with version 38018
I1108 02:32:30.435104  111868 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind]: bound to "pv-i-canbind"
I1108 02:32:30.435117  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind] status: set phase Bound
I1108 02:32:30.439515  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-i-canbind/status: (3.99906ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:30.439891  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind" with version 38021
I1108 02:32:30.439929  111868 pv_controller.go:740] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind" entered phase "Bound"
I1108 02:32:30.439950  111868 pv_controller.go:955] volume "pv-i-canbind" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind"
I1108 02:32:30.439977  111868 pv_controller.go:956] volume "pv-i-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind (uid: 08468d7f-938c-4a0f-8e72-02f786550446)", boundByController: true
I1108 02:32:30.439993  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind" status after binding: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I1108 02:32:30.440030  111868 pv_controller_base.go:533] storeObjectUpdate: ignoring claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind" version 38018
I1108 02:32:30.441083  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind" with version 38021
I1108 02:32:30.441116  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind]: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I1108 02:32:30.441140  111868 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind]: volume "pv-i-canbind" found: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind (uid: 08468d7f-938c-4a0f-8e72-02f786550446)", boundByController: true
I1108 02:32:30.441150  111868 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind]: claim is already correctly bound
I1108 02:32:30.441164  111868 pv_controller.go:929] binding volume "pv-i-canbind" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind"
I1108 02:32:30.441176  111868 pv_controller.go:827] updating PersistentVolume[pv-i-canbind]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind"
I1108 02:32:30.441193  111868 pv_controller.go:839] updating PersistentVolume[pv-i-canbind]: already bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind"
I1108 02:32:30.441201  111868 pv_controller.go:775] updating PersistentVolume[pv-i-canbind]: set phase Bound
I1108 02:32:30.441207  111868 pv_controller.go:778] updating PersistentVolume[pv-i-canbind]: phase Bound already set
I1108 02:32:30.441215  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind]: binding to "pv-i-canbind"
I1108 02:32:30.441231  111868 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind]: already bound to "pv-i-canbind"
I1108 02:32:30.441238  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind] status: set phase Bound
I1108 02:32:30.441253  111868 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind] status: phase Bound already set
I1108 02:32:30.441262  111868 pv_controller.go:955] volume "pv-i-canbind" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind"
I1108 02:32:30.441274  111868 pv_controller.go:956] volume "pv-i-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind (uid: 08468d7f-938c-4a0f-8e72-02f786550446)", boundByController: true
I1108 02:32:30.441284  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind" status after binding: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
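[editor's note] The pv_controller lines above trace the two-way bind performed at the resync: stamp the claimRef on the volume, move the volume to phase Bound, write the volume name into the claim, then move the claim's status to Bound, persisting each step through the PUT requests logged. A simplified in-memory sketch of that ordering (the real logic lives in pkg/controller/volume/persistentvolume/pv_controller.go and goes through the API server at every step):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// bindInMemory mirrors, on in-memory objects only, the order of updates the
// controller logs above: claimRef on the PV, PV phase Bound, volumeName on
// the PVC, then PVC phase Bound.
func bindInMemory(pv *corev1.PersistentVolume, pvc *corev1.PersistentVolumeClaim) {
	pv.Spec.ClaimRef = &corev1.ObjectReference{
		Kind:      "PersistentVolumeClaim",
		Namespace: pvc.Namespace,
		Name:      pvc.Name,
		UID:       pvc.UID,
	}
	pv.Status.Phase = corev1.VolumeBound
	pvc.Spec.VolumeName = pv.Name
	pvc.Status.Phase = corev1.ClaimBound
}

func main() {
	pv := &corev1.PersistentVolume{}
	pv.Name = "pv-i-canbind"
	pvc := &corev1.PersistentVolumeClaim{}
	pvc.Namespace, pvc.Name = "demo", "pvc-i-canbind"
	bindInMemory(pv, pvc)
	fmt.Println(pv.Spec.ClaimRef.Name, pv.Status.Phase, pvc.Spec.VolumeName, pvc.Status.Phase)
}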
I1108 02:32:30.486528  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.03004ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:30.586899  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.304785ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:30.686596  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.024522ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:30.786591  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.014508ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:30.886672  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.07229ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:30.986520  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.004979ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:31.086680  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.169395ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:31.187181  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.569479ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:31.286873  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.333304ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:31.389613  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (4.789945ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:31.488457  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (3.896506ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:31.588248  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (3.659673ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:31.686528  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.093339ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:31.786219  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.693943ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:31.886388  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.889027ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:31.986761  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.151154ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:32.086516  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (1.91775ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:32.186926  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (2.251174ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:32.216420  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind
I1108 02:32:32.216471  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind
I1108 02:32:32.216759  111868 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind" match with Node "node-1"
I1108 02:32:32.216763  111868 scheduler_binder.go:653] PersistentVolume "pv-i-canbind", Node "node-2" mismatch for Pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind": No matching NodeSelectorTerms
I1108 02:32:32.216887  111868 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind", node "node-1"
I1108 02:32:32.216914  111868 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind", node "node-1": all PVCs bound and nothing to do
I1108 02:32:32.217004  111868 factory.go:698] Attempting to bind pod-i-canbind to node-1
I1108 02:32:32.220515  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind/binding: (2.984266ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:32.220990  111868 scheduler.go:756] pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-canbind is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible.
I1108 02:32:32.223907  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (2.498784ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:32.292658  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-canbind: (7.486973ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:32.295135  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-i-canbind: (1.735033ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:32.302772  111868 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-canbind: (7.11112ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:32.323886  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (20.390881ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:32.333817  111868 pv_controller_base.go:265] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind" deleted
I1108 02:32:32.333904  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 38013
I1108 02:32:32.333946  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind (uid: 08468d7f-938c-4a0f-8e72-02f786550446)", boundByController: true
I1108 02:32:32.333957  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind
I1108 02:32:32.334734  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (10.24119ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:32.337351  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-i-canbind: (3.068756ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I1108 02:32:32.337612  111868 pv_controller.go:545] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind not found
I1108 02:32:32.337666  111868 pv_controller.go:573] volume "pv-i-canbind" is released and reclaim policy "Retain" will be executed
I1108 02:32:32.337677  111868 pv_controller.go:775] updating PersistentVolume[pv-i-canbind]: set phase Released
I1108 02:32:32.343463  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (5.392286ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I1108 02:32:32.344049  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 38369
I1108 02:32:32.344380  111868 pv_controller.go:796] volume "pv-i-canbind" entered phase "Released"
I1108 02:32:32.344576  111868 pv_controller.go:1009] reclaimVolume[pv-i-canbind]: policy is Retain, nothing to do
I1108 02:32:32.344696  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 38369
I1108 02:32:32.344863  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Released, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind (uid: 08468d7f-938c-4a0f-8e72-02f786550446)", boundByController: true
I1108 02:32:32.344962  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind
I1108 02:32:32.345054  111868 pv_controller.go:545] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind not found
I1108 02:32:32.345180  111868 pv_controller.go:1009] reclaimVolume[pv-i-canbind]: policy is Retain, nothing to do
I1108 02:32:32.348145  111868 httplog.go:90] DELETE /api/v1/persistentvolumes: (12.178249ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:32.348577  111868 pv_controller_base.go:216] volume "pv-i-canbind" deleted
I1108 02:32:32.348727  111868 pv_controller_base.go:403] deletion of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-canbind" was already processed
I1108 02:32:32.364041  111868 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (14.789849ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:32.364900  111868 volume_binding_test.go:191] Running test immediate cannot bind
I1108 02:32:32.368414  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.19675ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:32.372158  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.122095ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:32.375487  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (2.811516ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:32.375997  111868 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-cannotbind", version 38391
I1108 02:32:32.376023  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-cannotbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:32.376049  111868 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-cannotbind]: no volume found
I1108 02:32:32.376058  111868 pv_controller.go:1324] provisionClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-cannotbind]: started
E1108 02:32:32.376088  111868 pv_controller.go:1329] error finding provisioning plugin for claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-cannotbind: no volume plugin matched
I1108 02:32:32.376168  111868 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7", Name:"pvc-i-cannotbind", UID:"41512ca0-e017-4152-83ff-5c2e342f7dbd", APIVersion:"v1", ResourceVersion:"38391", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' no volume plugin matched
I1108 02:32:32.379444  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (3.205937ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
I1108 02:32:32.379748  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-cannotbind
I1108 02:32:32.379776  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-cannotbind
E1108 02:32:32.380000  111868 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims
E1108 02:32:32.380051  111868 factory.go:648] Error scheduling volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-cannotbind: error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims; retrying
I1108 02:32:32.380080  111868 scheduler.go:774] Updating pod condition for volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I1108 02:32:32.382344  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (5.864446ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I1108 02:32:32.387328  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (5.067135ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46718]
I1108 02:32:32.388503  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-cannotbind/status: (6.296809ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46070]
E1108 02:32:32.389150  111868 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims
I1108 02:32:32.389354  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-cannotbind
I1108 02:32:32.389376  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-cannotbind
E1108 02:32:32.389581  111868 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims
E1108 02:32:32.389626  111868 factory.go:648] Error scheduling volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-cannotbind: error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims; retrying
I1108 02:32:32.389655  111868 scheduler.go:774] Updating pod condition for volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-cannotbind to (PodScheduled==False, Reason=Unschedulable)
E1108 02:32:32.389672  111868 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims
I1108 02:32:32.391980  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-cannotbind: (9.513148ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46716]
I1108 02:32:32.394450  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-cannotbind: (4.361124ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
E1108 02:32:32.395001  111868 factory.go:673] pod is already present in unschedulableQ
I1108 02:32:32.395334  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (5.253016ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46718]
I1108 02:32:32.483047  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-i-cannotbind: (2.03477ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I1108 02:32:32.485359  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-i-cannotbind: (1.720433ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I1108 02:32:32.492328  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-cannotbind
I1108 02:32:32.492387  111868 scheduler.go:607] Skip schedule deleting pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-i-cannotbind
I1108 02:32:32.495326  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (2.472382ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46716]
I1108 02:32:32.498813  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (12.959983ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I1108 02:32:32.504894  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (5.596135ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I1108 02:32:32.505140  111868 pv_controller_base.go:265] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-i-cannotbind" deleted
I1108 02:32:32.507834  111868 httplog.go:90] DELETE /api/v1/persistentvolumes: (2.453967ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I1108 02:32:32.520876  111868 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (12.401124ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I1108 02:32:32.521265  111868 volume_binding_test.go:191] Running test wait can bind
I1108 02:32:32.524366  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.701372ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I1108 02:32:32.528650  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.294248ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I1108 02:32:32.532431  111868 httplog.go:90] POST /api/v1/persistentvolumes: (3.232917ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I1108 02:32:32.533234  111868 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind", version 38468
I1108 02:32:32.533275  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind]: phase: Pending, bound to: "", boundByController: false
I1108 02:32:32.533295  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I1108 02:32:32.533304  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind]: set phase Available
I1108 02:32:32.535652  111868 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind", version 38471
I1108 02:32:32.535697  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:32.535727  111868 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind]: no volume found
I1108 02:32:32.535750  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind] status: set phase Pending
I1108 02:32:32.535765  111868 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind] status: phase Pending already set
I1108 02:32:32.536784  111868 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7", Name:"pvc-w-canbind", UID:"cdaaa7fb-1933-4192-89b0-7705bc66bd2c", APIVersion:"v1", ResourceVersion:"38471", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1108 02:32:32.537331  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (4.076213ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I1108 02:32:32.537974  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (4.418923ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46716]
I1108 02:32:32.538198  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 38473
I1108 02:32:32.538223  111868 pv_controller.go:796] volume "pv-w-canbind" entered phase "Available"
I1108 02:32:32.539158  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 38473
I1108 02:32:32.539262  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "", boundByController: false
I1108 02:32:32.539377  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I1108 02:32:32.539457  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind]: set phase Available
I1108 02:32:32.539542  111868 pv_controller.go:778] updating PersistentVolume[pv-w-canbind]: phase Available already set
I1108 02:32:32.539384  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (2.291208ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:32.542610  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (4.42433ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I1108 02:32:32.544076  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-canbind
I1108 02:32:32.544241  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-canbind
I1108 02:32:32.544619  111868 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-canbind" on node "node-1"
I1108 02:32:32.544630  111868 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-canbind", PVC "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind" on node "node-2"
I1108 02:32:32.545241  111868 scheduler_binder.go:725] storage class "wait-jx5m" of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind" does not support dynamic provisioning
I1108 02:32:32.545429  111868 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-canbind", node "node-1"
I1108 02:32:32.545476  111868 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-canbind", version 38473
I1108 02:32:32.545605  111868 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-canbind", node "node-1"
I1108 02:32:32.545624  111868 scheduler_binder.go:404] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind" bound to volume "pv-w-canbind"
I1108 02:32:32.549856  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind: (3.74829ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:32.550186  111868 scheduler_binder.go:410] updating PersistentVolume[pv-w-canbind]: bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind"
I1108 02:32:32.550954  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 38477
I1108 02:32:32.551061  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind (uid: cdaaa7fb-1933-4192-89b0-7705bc66bd2c)", boundByController: true
I1108 02:32:32.551208  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind
I1108 02:32:32.551318  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:32.551456  111868 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind]: volume not bound yet, waiting for syncClaim to fix it
I1108 02:32:32.551604  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind" with version 38471
I1108 02:32:32.551624  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:32.551669  111868 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind]: volume "pv-w-canbind" found: phase: Available, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind (uid: cdaaa7fb-1933-4192-89b0-7705bc66bd2c)", boundByController: true
I1108 02:32:32.551681  111868 pv_controller.go:929] binding volume "pv-w-canbind" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind"
I1108 02:32:32.551693  111868 pv_controller.go:827] updating PersistentVolume[pv-w-canbind]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind"
I1108 02:32:32.551718  111868 pv_controller.go:839] updating PersistentVolume[pv-w-canbind]: already bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind"
I1108 02:32:32.551728  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind]: set phase Bound
I1108 02:32:32.561532  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 38480
I1108 02:32:32.561598  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind (uid: cdaaa7fb-1933-4192-89b0-7705bc66bd2c)", boundByController: true
I1108 02:32:32.561613  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind
I1108 02:32:32.561633  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1108 02:32:32.561659  111868 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind]: volume not bound yet, waiting for syncClaim to fix it
I1108 02:32:32.562024  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (9.778678ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:32.562464  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 38480
I1108 02:32:32.562501  111868 pv_controller.go:796] volume "pv-w-canbind" entered phase "Bound"
I1108 02:32:32.562517  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind]: binding to "pv-w-canbind"
I1108 02:32:32.562534  111868 pv_controller.go:899] volume "pv-w-canbind" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind"
I1108 02:32:32.570408  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-canbind: (7.445378ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:32.571050  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind" with version 38484
I1108 02:32:32.571083  111868 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind]: bound to "pv-w-canbind"
I1108 02:32:32.571095  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind] status: set phase Bound
I1108 02:32:32.578999  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-canbind/status: (7.02177ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:32.579316  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind" with version 38490
I1108 02:32:32.579345  111868 pv_controller.go:740] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind" entered phase "Bound"
I1108 02:32:32.579363  111868 pv_controller.go:955] volume "pv-w-canbind" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind"
I1108 02:32:32.579391  111868 pv_controller.go:956] volume "pv-w-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind (uid: cdaaa7fb-1933-4192-89b0-7705bc66bd2c)", boundByController: true
I1108 02:32:32.579407  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind" status after binding: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I1108 02:32:32.579440  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind" with version 38490
I1108 02:32:32.579452  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind]: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I1108 02:32:32.579469  111868 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind]: volume "pv-w-canbind" found: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind (uid: cdaaa7fb-1933-4192-89b0-7705bc66bd2c)", boundByController: true
I1108 02:32:32.579479  111868 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind]: claim is already correctly bound
I1108 02:32:32.579488  111868 pv_controller.go:929] binding volume "pv-w-canbind" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind"
I1108 02:32:32.579499  111868 pv_controller.go:827] updating PersistentVolume[pv-w-canbind]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind"
I1108 02:32:32.579519  111868 pv_controller.go:839] updating PersistentVolume[pv-w-canbind]: already bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind"
I1108 02:32:32.579529  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind]: set phase Bound
I1108 02:32:32.579537  111868 pv_controller.go:778] updating PersistentVolume[pv-w-canbind]: phase Bound already set
I1108 02:32:32.579547  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind]: binding to "pv-w-canbind"
I1108 02:32:32.579566  111868 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind]: already bound to "pv-w-canbind"
I1108 02:32:32.579575  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind] status: set phase Bound
I1108 02:32:32.579594  111868 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind] status: phase Bound already set
I1108 02:32:32.579607  111868 pv_controller.go:955] volume "pv-w-canbind" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind"
I1108 02:32:32.579627  111868 pv_controller.go:956] volume "pv-w-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind (uid: cdaaa7fb-1933-4192-89b0-7705bc66bd2c)", boundByController: true
I1108 02:32:32.579646  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind" status after binding: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I1108 02:32:32.645714  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind: (2.083533ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:32.748035  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind: (4.378789ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:32.847140  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind: (3.516959ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:32.945694  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind: (2.000843ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.046014  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind: (2.246517ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.146550  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind: (2.832382ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.216561  111868 cache.go:656] Couldn't expire cache for pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-canbind. Binding is still in progress.
I1108 02:32:33.246059  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind: (2.321135ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.346626  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind: (2.855861ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.445924  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind: (2.224488ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.545397  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind: (1.76504ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.550456  111868 scheduler_binder.go:553] All PVCs for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-canbind" are bound
I1108 02:32:33.550528  111868 factory.go:698] Attempting to bind pod-w-canbind to node-1
I1108 02:32:33.553039  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind/binding: (2.186158ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.553389  111868 scheduler.go:756] pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-canbind is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible.
I1108 02:32:33.557488  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (2.755178ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.646890  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-canbind: (3.193123ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.649971  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-canbind: (2.499145ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.651937  111868 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind: (1.466263ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.662218  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (9.497905ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.669800  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (6.943732ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.670652  111868 pv_controller_base.go:265] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind" deleted
I1108 02:32:33.670700  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 38480
I1108 02:32:33.670734  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind (uid: cdaaa7fb-1933-4192-89b0-7705bc66bd2c)", boundByController: true
I1108 02:32:33.670745  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind
I1108 02:32:33.672230  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-canbind: (1.244021ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46716]
I1108 02:32:33.672469  111868 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind not found
I1108 02:32:33.672492  111868 pv_controller.go:573] volume "pv-w-canbind" is released and reclaim policy "Retain" will be executed
I1108 02:32:33.672505  111868 pv_controller.go:775] updating PersistentVolume[pv-w-canbind]: set phase Released
I1108 02:32:33.674772  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (1.922318ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46716]
I1108 02:32:33.675059  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 38684
I1108 02:32:33.675093  111868 pv_controller.go:796] volume "pv-w-canbind" entered phase "Released"
I1108 02:32:33.675105  111868 pv_controller.go:1009] reclaimVolume[pv-w-canbind]: policy is Retain, nothing to do
I1108 02:32:33.675129  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 38684
I1108 02:32:33.675152  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind]: phase: Released, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind (uid: cdaaa7fb-1933-4192-89b0-7705bc66bd2c)", boundByController: true
I1108 02:32:33.675164  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind
I1108 02:32:33.675194  111868 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind not found
I1108 02:32:33.675201  111868 pv_controller.go:1009] reclaimVolume[pv-w-canbind]: policy is Retain, nothing to do
I1108 02:32:33.675768  111868 store.go:231] deletion of /89d04f6a-c49a-49bf-9f59-18031bc0a51b/persistentvolumes/pv-w-canbind failed because of a conflict, going to retry
I1108 02:32:33.679259  111868 httplog.go:90] DELETE /api/v1/persistentvolumes: (8.945ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.679870  111868 pv_controller_base.go:216] volume "pv-w-canbind" deleted
I1108 02:32:33.679931  111868 pv_controller_base.go:403] deletion of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-canbind" was already processed
I1108 02:32:33.692290  111868 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (12.177038ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.692553  111868 volume_binding_test.go:191] Running test wait pvc prebound
I1108 02:32:33.694593  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.78533ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.698661  111868 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.521662ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.703347  111868 httplog.go:90] POST /api/v1/persistentvolumes: (3.025952ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.704023  111868 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-pvc-prebound", version 38699
I1108 02:32:33.704198  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Pending, bound to: "", boundByController: false
I1108 02:32:33.704293  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I1108 02:32:33.704384  111868 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I1108 02:32:33.708698  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (4.050519ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.709494  111868 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound", version 38702
I1108 02:32:33.709517  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound]: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1108 02:32:33.709529  111868 pv_controller.go:345] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested
I1108 02:32:33.709556  111868 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested and found: phase: Pending, bound to: "", boundByController: false
I1108 02:32:33.709575  111868 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound]: volume is unbound, binding
I1108 02:32:33.709594  111868 pv_controller.go:929] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound"
I1108 02:32:33.709606  111868 pv_controller.go:827] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound"
I1108 02:32:33.709628  111868 pv_controller.go:847] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound" bound to volume "pv-w-pvc-prebound"
I1108 02:32:33.710801  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (5.354051ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46716]
I1108 02:32:33.711236  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 38701
I1108 02:32:33.711265  111868 pv_controller.go:796] volume "pv-w-pvc-prebound" entered phase "Available"
I1108 02:32:33.711293  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 38701
I1108 02:32:33.711310  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I1108 02:32:33.711327  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I1108 02:32:33.711332  111868 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I1108 02:32:33.711338  111868 pv_controller.go:778] updating PersistentVolume[pv-w-pvc-prebound]: phase Available already set
I1108 02:32:33.711606  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound: (1.693861ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:33.711800  111868 pv_controller.go:850] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1108 02:32:33.711825  111868 pv_controller.go:932] error binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1108 02:32:33.711858  111868 pv_controller_base.go:251] could not sync claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1108 02:32:33.713020  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (2.643925ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46936]
I1108 02:32:33.713301  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound
I1108 02:32:33.713623  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound
E1108 02:32:33.713836  111868 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1108 02:32:33.713901  111868 factory.go:648] Error scheduling volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1108 02:32:33.713933  111868 scheduler.go:774] Updating pod condition for volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I1108 02:32:33.716337  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (1.594742ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:33.718776  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound/status: (4.365929ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
E1108 02:32:33.719098  111868 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1108 02:32:33.719474  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (5.049444ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46716]
I1108 02:32:33.719513  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound
I1108 02:32:33.719529  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound
E1108 02:32:33.719726  111868 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1108 02:32:33.719898  111868 factory.go:648] Error scheduling volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1108 02:32:33.719938  111868 scheduler.go:774] Updating pod condition for volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E1108 02:32:33.719954  111868 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1108 02:32:33.721749  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.48418ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
E1108 02:32:33.722030  111868 factory.go:673] pod is already present in unschedulableQ
I1108 02:32:33.722531  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (2.254589ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:33.816170  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.770691ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:33.916232  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.84718ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:34.015935  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.614531ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:34.116261  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.909401ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:34.216917  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.413497ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:34.316640  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.26274ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:34.418907  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (4.602747ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:34.516267  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.923753ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:34.618161  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (3.789779ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:34.715934  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.590159ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:34.831417  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (17.129406ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:34.916445  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.080111ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:35.017271  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.907793ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:35.116439  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.121839ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:35.130047  111868 httplog.go:90] GET /api/v1/namespaces/default: (1.650651ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:35.133823  111868 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (3.354913ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:35.136259  111868 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.80029ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:35.216090  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.747599ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:35.316031  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.695298ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:35.415979  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.717ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:35.516319  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.965964ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:35.616280  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.910459ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:35.716473  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.116668ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:35.815932  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.625204ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:35.916401  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.071965ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:36.016304  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.971578ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:36.116551  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.220013ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:36.216346  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.98013ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:36.316182  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.821752ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:36.416524  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.088073ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:36.516381  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.081676ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:36.616439  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.105504ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:36.716268  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.940557ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:36.816270  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.879135ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:36.916403  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.031719ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:37.016118  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.739154ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:37.116369  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.917714ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:37.216357  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.964378ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:37.316212  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.905945ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:37.416312  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.939773ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:37.516408  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.058598ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:37.616585  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.25065ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:37.716652  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.280801ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:37.816443  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.126491ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:37.916434  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.141517ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:38.016941  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.491615ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:38.116295  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.994078ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:38.216152  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.838442ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:38.316338  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.966908ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:38.416500  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.095029ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:38.520086  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (5.716153ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:38.616494  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.123803ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:38.716555  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.200169ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:38.816515  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.038743ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:38.916367  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.970431ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:39.016468  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.030159ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:39.116560  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.156265ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:39.216717  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.227068ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:39.316607  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.281567ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:39.416389  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.963539ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:39.516324  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.979547ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:39.616017  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.714527ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:39.716803  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.45028ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:39.815880  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.556062ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:39.916327  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.031405ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:40.016294  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.864775ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:40.116385  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.995231ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:40.216320  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.04684ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:40.316246  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.879277ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:40.416693  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.337302ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:40.516783  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.398808ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:40.616665  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.333155ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:40.716474  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.092168ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:40.818036  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (3.52946ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:40.916057  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.721147ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:41.016422  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.800349ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:41.116430  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.053705ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:41.217167  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.755315ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:41.316227  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.904066ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:41.416290  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.985316ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:41.517192  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.945504ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:41.616183  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.876824ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:41.716632  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.312988ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:41.816442  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.133022ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:41.916582  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.196455ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:41.947450  111868 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.824251ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:41.949672  111868 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.70528ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:41.951644  111868 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.482804ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:42.016079  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.672222ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:42.116178  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.951138ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:42.216356  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.962417ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:42.316750  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.316656ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:42.416453  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.969048ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:42.517697  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (3.465186ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:42.616513  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.180594ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:42.716596  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.214568ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:42.816251  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.022536ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:42.916559  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.268671ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:43.016880  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.593967ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:43.116534  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.20701ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:43.216551  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.143489ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:43.316613  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.117277ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:43.416505  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.099401ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:43.516070  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.755796ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:43.616327  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.992006ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:43.716470  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.1092ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:43.816395  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.038538ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:43.916207  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.871692ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:44.016354  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.059678ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:44.116261  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.924247ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:44.218816  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (4.548007ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:44.316558  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.787601ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:44.416583  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.194684ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:44.516332  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.938722ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:44.616036  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.696801ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:44.716057  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.772356ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:44.815971  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.646058ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:44.916601  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.190714ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:45.016368  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.975544ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:45.116255  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.86208ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:45.130570  111868 httplog.go:90] GET /api/v1/namespaces/default: (1.758382ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:45.133110  111868 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.877636ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:45.135234  111868 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.575734ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:45.216184  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.864473ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:45.316600  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.224949ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:45.411429  111868 pv_controller_base.go:426] resyncing PV controller
I1108 02:32:45.411543  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 38701
I1108 02:32:45.411587  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I1108 02:32:45.411602  111868 pv_controller.go:492] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I1108 02:32:45.411610  111868 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I1108 02:32:45.411617  111868 pv_controller.go:778] updating PersistentVolume[pv-w-pvc-prebound]: phase Available already set
I1108 02:32:45.411636  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound" with version 38702
I1108 02:32:45.411656  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound]: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1108 02:32:45.411668  111868 pv_controller.go:345] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested
I1108 02:32:45.411681  111868 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested and found: phase: Available, bound to: "", boundByController: false
I1108 02:32:45.411693  111868 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound]: volume is unbound, binding
I1108 02:32:45.411720  111868 pv_controller.go:929] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound"
I1108 02:32:45.411733  111868 pv_controller.go:827] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound"
I1108 02:32:45.411769  111868 pv_controller.go:847] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound" bound to volume "pv-w-pvc-prebound"
I1108 02:32:45.415589  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound: (3.277253ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:45.416577  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 40029
I1108 02:32:45.416618  111868 pv_controller.go:860] updating PersistentVolume[pv-w-pvc-prebound]: bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound"
I1108 02:32:45.416632  111868 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1108 02:32:45.416585  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound
I1108 02:32:45.416736  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound
E1108 02:32:45.416989  111868 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1108 02:32:45.417002  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 40029
I1108 02:32:45.417074  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound (uid: 3c5cd26a-d4b6-48c5-82bb-492a8e1f466c)", boundByController: true
I1108 02:32:45.417123  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound
E1108 02:32:45.417147  111868 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1108 02:32:45.417230  111868 factory.go:648] Error scheduling volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1108 02:32:45.417269  111868 scheduler.go:774] Updating pod condition for volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E1108 02:32:45.417289  111868 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1108 02:32:45.417154  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound found: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1108 02:32:45.417493  111868 pv_controller.go:617] synchronizing PersistentVolume[pv-w-pvc-prebound]: all is bound
I1108 02:32:45.417510  111868 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1108 02:32:45.419257  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (4.925262ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
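The two "VolumeBinding" filter errors and the Unschedulable condition at 02:32:45.416-417 happen because, at that instant, the scheduler still sees pvc-w-prebound as Pending while the PV controller is mid-way through binding it; the pod is requeued and succeeds on the retry at 02:32:47.219. A minimal sketch of the kind of bound-claim check behind the "unbound immediate PersistentVolumeClaims" message, assuming a plain (non-WaitForFirstConsumer) storage class, follows; it is an illustration of the idea, not the plugin's actual code.

// Hedged sketch: a claim counts as "bound" for scheduling only once it names a
// volume and has reached phase Bound; until then an immediate-binding pod is
// held back and retried, which is what the log above shows.
package prebound

import v1 "k8s.io/api/core/v1"

func claimIsBound(pvc *v1.PersistentVolumeClaim) bool {
	return pvc.Spec.VolumeName != "" && pvc.Status.Phase == v1.ClaimBound
}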
I1108 02:32:45.424339  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (7.232372ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:45.424708  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (6.264701ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49036]
I1108 02:32:45.425705  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 40030
I1108 02:32:45.425748  111868 pv_controller.go:796] volume "pv-w-pvc-prebound" entered phase "Bound"
I1108 02:32:45.425769  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound]: binding to "pv-w-pvc-prebound"
I1108 02:32:45.425805  111868 pv_controller.go:899] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound"
I1108 02:32:45.426219  111868 store.go:365] GuaranteedUpdate of /89d04f6a-c49a-49bf-9f59-18031bc0a51b/persistentvolumes/pv-w-pvc-prebound failed because of a conflict, going to retry
I1108 02:32:45.427963  111868 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events/pod-w-pvc-prebound.15d50f3257198ac2: (9.418495ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:45.429257  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (11.034528ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49034]
I1108 02:32:45.429571  111868 pv_controller.go:788] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound failed: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1108 02:32:45.429599  111868 pv_controller_base.go:204] could not sync volume "pv-w-pvc-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1108 02:32:45.429675  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 40030
I1108 02:32:45.429718  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound (uid: 3c5cd26a-d4b6-48c5-82bb-492a8e1f466c)", boundByController: true
I1108 02:32:45.429734  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound
I1108 02:32:45.429876  111868 pv_controller.go:553] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound found: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1108 02:32:45.429899  111868 pv_controller.go:617] synchronizing PersistentVolume[pv-w-pvc-prebound]: all is bound
I1108 02:32:45.429910  111868 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1108 02:32:45.429922  111868 pv_controller.go:778] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
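The 409 on PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status and the "object has been modified" messages above are an ordinary optimistic-concurrency conflict: two writers raced on the same volume, the losing update is dropped, and the controller simply requeues and finds the phase already Bound on the next pass. Callers that prefer to retry in place rather than requeue typically use client-go's retry.RetryOnConflict; a minimal sketch, with the client variable and the phase update chosen purely for illustration (the test's controller does not do this), is:

// Hedged sketch: retrying a PV status update after a 409 Conflict.
// "client" (a kubernetes.Interface) and the literal volume name are
// illustrative assumptions, not values taken from the test code.
package prebound

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func setVolumePhaseBound(client kubernetes.Interface) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read so the update carries the resourceVersion written by the
		// conflicting writer; without this every attempt would 409 again.
		pv, err := client.CoreV1().PersistentVolumes().Get("pv-w-pvc-prebound", metav1.GetOptions{})
		if err != nil {
			return err
		}
		pv.Status.Phase = v1.VolumeBound
		_, err = client.CoreV1().PersistentVolumes().UpdateStatus(pv)
		return err // a Conflict error triggers another attempt
	})
}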
I1108 02:32:45.435964  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-prebound: (8.784515ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46938]
I1108 02:32:45.436620  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound" with version 40032
I1108 02:32:45.436653  111868 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound]: bound to "pv-w-pvc-prebound"
I1108 02:32:45.436667  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound] status: set phase Bound
I1108 02:32:45.444256  111868 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-prebound/status: (5.956458ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:45.444649  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound" with version 40033
I1108 02:32:45.444681  111868 pv_controller.go:740] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound" entered phase "Bound"
I1108 02:32:45.444696  111868 pv_controller.go:955] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound"
I1108 02:32:45.444715  111868 pv_controller.go:956] volume "pv-w-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound (uid: 3c5cd26a-d4b6-48c5-82bb-492a8e1f466c)", boundByController: true
I1108 02:32:45.444727  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound" status after binding: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I1108 02:32:45.444754  111868 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound" with version 40033
I1108 02:32:45.444763  111868 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound]: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I1108 02:32:45.444776  111868 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound]: volume "pv-w-pvc-prebound" found: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound (uid: 3c5cd26a-d4b6-48c5-82bb-492a8e1f466c)", boundByController: true
I1108 02:32:45.444784  111868 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound]: claim is already correctly bound
I1108 02:32:45.444791  111868 pv_controller.go:929] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound"
I1108 02:32:45.444798  111868 pv_controller.go:827] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound"
I1108 02:32:45.444817  111868 pv_controller.go:839] updating PersistentVolume[pv-w-pvc-prebound]: already bound to "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound"
I1108 02:32:45.444826  111868 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1108 02:32:45.444834  111868 pv_controller.go:778] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I1108 02:32:45.444858  111868 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound]: binding to "pv-w-pvc-prebound"
I1108 02:32:45.444878  111868 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound]: already bound to "pv-w-pvc-prebound"
I1108 02:32:45.444887  111868 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound] status: set phase Bound
I1108 02:32:45.444906  111868 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound] status: phase Bound already set
I1108 02:32:45.444918  111868 pv_controller.go:955] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound"
I1108 02:32:45.444939  111868 pv_controller.go:956] volume "pv-w-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound (uid: 3c5cd26a-d4b6-48c5-82bb-492a8e1f466c)", boundByController: true
I1108 02:32:45.444953  111868 pv_controller.go:957] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound" status after binding: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
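The sequence from the resync at 02:32:45.411 down to here is the PV controller completing a pre-bound pair: pvc-w-prebound already named pv-w-pvc-prebound in its spec, so the controller writes the claimRef onto the volume (hence boundByController: true on the PV but false on the claim), moves both objects to phase Bound, and finishes with bindCompleted: true. A minimal sketch of objects wired that way, with the capacity, access mode, namespace, and hostPath source as illustrative assumptions, is:

// Hedged sketch of a PV/PVC pair set up the way the log above describes:
// the claim requests the volume by name up front, the volume starts with no
// claimRef, and the reclaim policy matches the "Retain" seen later in the log.
package prebound

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var preboundPV = &v1.PersistentVolume{
	ObjectMeta: metav1.ObjectMeta{Name: "pv-w-pvc-prebound"},
	Spec: v1.PersistentVolumeSpec{
		Capacity:    v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
		AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
		PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimRetain,
		PersistentVolumeSource: v1.PersistentVolumeSource{
			HostPath: &v1.HostPathVolumeSource{Path: "/tmp/pv-w-pvc-prebound"},
		},
		// No ClaimRef here: the controller fills it in when it binds,
		// as logged at 02:32:45.411769.
	},
}

var preboundPVC = &v1.PersistentVolumeClaim{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "pvc-w-prebound",
		Namespace: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7",
	},
	Spec: v1.PersistentVolumeClaimSpec{
		AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
		Resources: v1.ResourceRequirements{
			Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
		},
		VolumeName: "pv-w-pvc-prebound", // pre-bound: the claim asks for this specific volume
	},
}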
I1108 02:32:45.516473  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.979102ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:45.616744  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.321012ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:45.716523  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.116296ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:45.816428  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.012488ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:45.917132  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.679023ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:46.021112  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (6.727828ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:46.116351  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.930551ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:46.215807  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.533581ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:46.316099  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.777256ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:46.416597  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.246213ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:46.516429  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.082009ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:46.616342  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.020402ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:46.716271  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.929826ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:46.816343  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.979164ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:46.916375  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.000782ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.016358  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.936839ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.116279  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.968543ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.216302  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (1.951951ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.219579  111868 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound
I1108 02:32:47.219616  111868 scheduler.go:611] Attempting to schedule pod: volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound
I1108 02:32:47.219910  111868 scheduler_binder.go:653] PersistentVolume "pv-w-pvc-prebound", Node "node-2" mismatch for Pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound": No matching NodeSelectorTerms
I1108 02:32:47.219912  111868 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound" match with Node "node-1"
I1108 02:32:47.220011  111868 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound", node "node-1"
I1108 02:32:47.220023  111868 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound", node "node-1": all PVCs bound and nothing to do
I1108 02:32:47.220095  111868 factory.go:698] Attempting to bind pod-w-pvc-prebound to node-1
I1108 02:32:47.223211  111868 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound/binding: (2.754429ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.223478  111868 scheduler.go:756] pod volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pod-w-pvc-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible.
I1108 02:32:47.226660  111868 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/events: (2.751069ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
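At 02:32:47.219 the binder rejects node-2 with "No matching NodeSelectorTerms" and accepts node-1, which implies pv-w-pvc-prebound carries a node affinity that only node-1 satisfies; the pod is then bound to node-1 and the test proceeds to tear-down. A minimal sketch of a VolumeNodeAffinity that would produce that split, assuming the nodes are selected by the kubernetes.io/hostname label (the log only tells us node-1 matches and node-2 does not), is:

// Hedged sketch of the kind of node affinity behind the node-1/node-2 split
// logged above. The label key and value are assumptions for illustration.
package prebound

import v1 "k8s.io/api/core/v1"

var pvNodeAffinity = &v1.VolumeNodeAffinity{
	Required: &v1.NodeSelector{
		NodeSelectorTerms: []v1.NodeSelectorTerm{{
			MatchExpressions: []v1.NodeSelectorRequirement{{
				Key:      "kubernetes.io/hostname",
				Operator: v1.NodeSelectorOpIn,
				Values:   []string{"node-1"}, // node-2 fails this term, so the filter rejects it
			}},
		}},
	},
}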
I1108 02:32:47.316368  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods/pod-w-pvc-prebound: (2.029802ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.318684  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-prebound: (1.676675ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.320989  111868 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-pvc-prebound: (1.856991ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.328749  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (7.187493ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.334486  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (5.134699ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.334774  111868 pv_controller_base.go:265] claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound" deleted
I1108 02:32:47.334814  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 40030
I1108 02:32:47.334885  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound (uid: 3c5cd26a-d4b6-48c5-82bb-492a8e1f466c)", boundByController: true
I1108 02:32:47.334901  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound
I1108 02:32:47.336374  111868 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims/pvc-w-prebound: (1.207591ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:47.336681  111868 pv_controller.go:545] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound not found
I1108 02:32:47.336711  111868 pv_controller.go:573] volume "pv-w-pvc-prebound" is released and reclaim policy "Retain" will be executed
I1108 02:32:47.336727  111868 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Released
I1108 02:32:47.359080  111868 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (22.015827ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46770]
I1108 02:32:47.359393  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 40127
I1108 02:32:47.359432  111868 pv_controller.go:796] volume "pv-w-pvc-prebound" entered phase "Released"
I1108 02:32:47.359445  111868 pv_controller.go:1009] reclaimVolume[pv-w-pvc-prebound]: policy is Retain, nothing to do
I1108 02:32:47.360270  111868 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 40127
I1108 02:32:47.360303  111868 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Released, bound to: "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound (uid: 3c5cd26a-d4b6-48c5-82bb-492a8e1f466c)", boundByController: true
I1108 02:32:47.360314  111868 pv_controller.go:512] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound
I1108 02:32:47.360331  111868 pv_controller.go:545] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound not found
I1108 02:32:47.360337  111868 pv_controller.go:1009] reclaimVolume[pv-w-pvc-prebound]: policy is Retain, nothing to do
I1108 02:32:47.390575  111868 store.go:231] deletion of /89d04f6a-c49a-49bf-9f59-18031bc0a51b/persistentvolumes/pv-w-pvc-prebound failed because of a conflict, going to retry
I1108 02:32:47.394873  111868 httplog.go:90] DELETE /api/v1/persistentvolumes: (59.862734ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.396251  111868 pv_controller_base.go:216] volume "pv-w-pvc-prebound" deleted
I1108 02:32:47.396307  111868 pv_controller_base.go:403] deletion of claim "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pvc-w-prebound" was already processed
I1108 02:32:47.414005  111868 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (18.610982ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.414404  111868 volume_binding_test.go:920] test cluster "volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7" start to tear down
I1108 02:32:47.417026  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/pods: (2.302381ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.420455  111868 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-3d335748-39f6-411b-bcf9-6defe2033fb7/persistentvolumeclaims: (3.057933ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.423624  111868 httplog.go:90] DELETE /api/v1/persistentvolumes: (2.721802ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.426020  111868 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (1.984108ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.426806  111868 pv_controller_base.go:305] Shutting down persistent volume controller
I1108 02:32:47.426828  111868 pv_controller_base.go:416] claim worker queue shutting down
I1108 02:32:47.427361  111868 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=31946&timeout=9m52s&timeoutSeconds=592&watch=true: (1m2.208522278s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33796]
I1108 02:32:47.427470  111868 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=31947&timeout=7m17s&timeoutSeconds=437&watch=true: (1m2.187568524s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33824]
I1108 02:32:47.427499  111868 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=31946&timeout=7m27s&timeoutSeconds=447&watch=true: (1m2.115921404s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I1108 02:32:47.427617  111868 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=31947&timeout=5m19s&timeoutSeconds=319&watch=true: (1m2.115813113s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33830]
I1108 02:32:47.427738  111868 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=31945&timeout=9m37s&timeoutSeconds=577&watch=true: (1m2.115318419s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33838]
I1108 02:32:47.427789  111868 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=31945&timeout=9m45s&timeoutSeconds=585&watch=true: (1m2.196475269s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33790]
I1108 02:32:47.427997  111868 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=31945&timeout=6m18s&timeoutSeconds=378&watch=true: (1m2.116041799s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1108 02:32:47.428140  111868 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=31945&timeout=5m13s&timeoutSeconds=313&watch=true: (1m2.20621763s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I1108 02:32:47.428408  111868 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=31950&timeout=9m43s&timeoutSeconds=583&watch=true: (1m2.215100712s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33802]
I1108 02:32:47.428567  111868 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=31950&timeout=8m42s&timeoutSeconds=522&watch=true: (1m2.117306231s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33832]
I1108 02:32:47.429160  111868 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=31949&timeout=6m34s&timeoutSeconds=394&watch=true: (1m2.205955373s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33800]
I1108 02:32:47.429381  111868 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=31952&timeout=8m45s&timeoutSeconds=525&watch=true: (1m2.194302687s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33818]
I1108 02:32:47.429565  111868 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=31952&timeout=7m40s&timeoutSeconds=460&watch=true: (1m2.192099186s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33820]
I1108 02:32:47.430270  111868 pv_controller_base.go:359] volume worker queue shutting down
I1108 02:32:47.430484  111868 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=31948&timeout=9m19s&timeoutSeconds=559&watch=true: (1m2.190604484s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33822]
I1108 02:32:47.430980  111868 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=32317&timeout=7m15s&timeoutSeconds=435&watch=true: (1m2.215773506s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33794]
I1108 02:32:47.431340  111868 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=31950&timeout=7m29s&timeoutSeconds=449&watch=true: (1m2.217283479s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33804]
I1108 02:32:47.459727  111868 httplog.go:90] DELETE /api/v1/nodes: (28.017882ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.460021  111868 controller.go:185] Shutting down kubernetes service endpoint reconciler
I1108 02:32:47.462356  111868 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.824986ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.468287  111868 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (5.375445ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49038]
I1108 02:32:47.468643  111868 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I1108 02:32:47.469245  111868 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&resourceVersion=31945&timeout=5m47s&timeoutSeconds=347&watch=true: (1m5.567238252s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33106]
--- FAIL: TestVolumeBinding (65.94s)
    volume_binding_test.go:243: Failed to schedule Pod "pod-i-pvc-prebound": timed out waiting for the condition

				from junit_99844db6e586a0ff1ded59c41b65ce7fe8e8a77e_20191108-022359.xml
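The 100 ms GETs on .../pods/pod-w-pvc-prebound that fill this log are the test's scheduling poll, and "timed out waiting for the condition" is the generic error wait.Poll reports when such a poll never sees its condition become true before the deadline (here for the earlier pod-i-pvc-prebound case). A minimal sketch of that polling pattern, with the interval, timeout, and client variable as assumed values rather than the test's own, is:

// Hedged sketch of the wait.Poll pattern behind the repeated GETs and the
// "timed out waiting for the condition" failure. Interval and timeout are
// illustrative assumptions, not the test's exact values.
package prebound

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPodScheduled(client kubernetes.Interface, ns, name string) error {
	return wait.Poll(100*time.Millisecond, 60*time.Second, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Scheduled once the scheduler has written a node name into the spec.
		return pod.Spec.NodeName != "", nil
	})
}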
