PR: alculquicondor: Fix cmd/kubelet/app lint issues
Result: FAILURE
Tests: 1 failed / 622 succeeded
Started: 2019-02-11 20:30
Elapsed: 27m42s
Revision:
Builder: gke-prow-containerd-pool-99179761-4377
Refs: master:f7c4389b, 73926:17a63544
pod: c3fae4dd-2e3b-11e9-8de8-0a580a6c0524
infra-commit: 7b101c686
repo: k8s.io/kubernetes
repo-commit: 3d89da41e8b456c8ed59a0dde26ca58d7fe240aa
repos: {'k8s.io/kubernetes': 'master:f7c4389b793cd6cf0de8d67f2c5db28b3985ad59,73926:17a635448aaa804a26afca89cad420f9f2e6a7b6'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestVolumeBinding 1m21s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestVolumeBinding$
I0211 20:52:26.360210  123370 feature_gate.go:226] feature gates: &{map[PodPriority:true TaintNodesByCondition:true PersistentLocalVolumes:true]}
I0211 20:52:26.361485  123370 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0211 20:52:26.361568  123370 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0211 20:52:26.361611  123370 master.go:272] Node port range unspecified. Defaulting to 30000-32767.
I0211 20:52:26.361690  123370 master.go:228] Using reconciler: 
I0211 20:52:26.364970  123370 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.365112  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.365136  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.365186  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.365333  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.366265  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.366356  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.366813  123370 store.go:1310] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0211 20:52:26.366883  123370 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.367097  123370 reflector.go:170] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0211 20:52:26.367161  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.367191  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.367237  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.367461  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.368099  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.368147  123370 store.go:1310] Monitoring events count at <storage-prefix>//events
I0211 20:52:26.368180  123370 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.368238  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.368314  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.368335  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.368374  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.368491  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.368965  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.369096  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.369750  123370 store.go:1310] Monitoring limitranges count at <storage-prefix>//limitranges
I0211 20:52:26.369801  123370 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.369853  123370 reflector.go:170] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0211 20:52:26.369903  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.369919  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.370005  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.370078  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.370499  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.370596  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.371181  123370 store.go:1310] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0211 20:52:26.371330  123370 reflector.go:170] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0211 20:52:26.371368  123370 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.371483  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.371517  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.371558  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.371638  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.372139  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.372236  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.372882  123370 store.go:1310] Monitoring secrets count at <storage-prefix>//secrets
I0211 20:52:26.372968  123370 reflector.go:170] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0211 20:52:26.373671  123370 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.373765  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.373789  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.373833  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.373890  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.374448  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.374510  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.375058  123370 store.go:1310] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0211 20:52:26.375136  123370 reflector.go:170] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0211 20:52:26.375272  123370 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.375345  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.375360  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.375391  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.375488  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.375848  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.375925  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.376319  123370 store.go:1310] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0211 20:52:26.376363  123370 reflector.go:170] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0211 20:52:26.376515  123370 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.376591  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.376610  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.376634  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.376673  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.377127  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.377172  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.377668  123370 store.go:1310] Monitoring configmaps count at <storage-prefix>//configmaps
I0211 20:52:26.377828  123370 reflector.go:170] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0211 20:52:26.377874  123370 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.377992  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.378015  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.378048  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.378119  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.378721  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.378762  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.379182  123370 store.go:1310] Monitoring namespaces count at <storage-prefix>//namespaces
I0211 20:52:26.379263  123370 reflector.go:170] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0211 20:52:26.379364  123370 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.379478  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.379494  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.379527  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.379618  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.380072  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.380467  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.380767  123370 store.go:1310] Monitoring endpoints count at <storage-prefix>//endpoints
I0211 20:52:26.380933  123370 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.381050  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.381081  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.381132  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.381059  123370 reflector.go:170] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0211 20:52:26.381298  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.381911  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.382106  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.382479  123370 store.go:1310] Monitoring nodes count at <storage-prefix>//nodes
I0211 20:52:26.382536  123370 reflector.go:170] Listing and watching *core.Node from storage/cacher.go:/nodes
I0211 20:52:26.382619  123370 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.382719  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.382746  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.382787  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.382867  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.383300  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.383381  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.383922  123370 store.go:1310] Monitoring pods count at <storage-prefix>//pods
I0211 20:52:26.384079  123370 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.384174  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.384190  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.384224  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.384266  123370 reflector.go:170] Listing and watching *core.Pod from storage/cacher.go:/pods
I0211 20:52:26.384518  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.384986  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.385094  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.385797  123370 store.go:1310] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0211 20:52:26.385835  123370 reflector.go:170] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0211 20:52:26.386008  123370 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.386115  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.386148  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.386270  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.386458  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.386828  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.386916  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.387567  123370 store.go:1310] Monitoring services count at <storage-prefix>//services
I0211 20:52:26.387613  123370 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.387629  123370 reflector.go:170] Listing and watching *core.Service from storage/cacher.go:/services
I0211 20:52:26.387797  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.387819  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.387868  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.388042  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.388478  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.388933  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.389009  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.389045  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.389093  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.389235  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.389646  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.389743  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.389876  123370 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.389991  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.390017  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.390057  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.390125  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.390487  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.390535  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.391213  123370 store.go:1310] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0211 20:52:26.391307  123370 reflector.go:170] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0211 20:52:26.407115  123370 master.go:407] Skipping disabled API group "auditregistration.k8s.io".
I0211 20:52:26.407172  123370 master.go:415] Enabling API group "authentication.k8s.io".
I0211 20:52:26.407185  123370 master.go:415] Enabling API group "authorization.k8s.io".
I0211 20:52:26.407371  123370 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.407559  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.407587  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.407636  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.407717  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.408284  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.408507  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.409252  123370 store.go:1310] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0211 20:52:26.409306  123370 reflector.go:170] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0211 20:52:26.409466  123370 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.409575  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.409592  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.409667  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.409874  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.410622  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.411319  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.412080  123370 store.go:1310] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0211 20:52:26.412309  123370 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.412444  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.412472  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.412503  123370 reflector.go:170] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0211 20:52:26.412515  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.412961  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.413650  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.413760  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.414308  123370 store.go:1310] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0211 20:52:26.414347  123370 master.go:415] Enabling API group "autoscaling".
I0211 20:52:26.414557  123370 reflector.go:170] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0211 20:52:26.414588  123370 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.414682  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.414697  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.414771  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.414821  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.415241  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.415374  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.416066  123370 store.go:1310] Monitoring jobs.batch count at <storage-prefix>//jobs
I0211 20:52:26.416345  123370 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.416616  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.416655  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.416720  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.416732  123370 reflector.go:170] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0211 20:52:26.416807  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.417494  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.417837  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.418312  123370 store.go:1310] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0211 20:52:26.418371  123370 master.go:415] Enabling API group "batch".
I0211 20:52:26.418676  123370 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.418799  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.418822  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.418857  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.418912  123370 reflector.go:170] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0211 20:52:26.419191  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.420079  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.420174  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.422826  123370 store.go:1310] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0211 20:52:26.422865  123370 master.go:415] Enabling API group "certificates.k8s.io".
I0211 20:52:26.422963  123370 reflector.go:170] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0211 20:52:26.423308  123370 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.423403  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.423444  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.423483  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.423544  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.424025  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.424115  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.424547  123370 store.go:1310] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0211 20:52:26.424615  123370 reflector.go:170] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0211 20:52:26.424964  123370 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.425086  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.425110  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.425146  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.425195  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.425887  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.425999  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.426650  123370 store.go:1310] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0211 20:52:26.426679  123370 master.go:415] Enabling API group "coordination.k8s.io".
I0211 20:52:26.426722  123370 reflector.go:170] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0211 20:52:26.426870  123370 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.427010  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.427037  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.427070  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.427158  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.427686  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.428145  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.428375  123370 store.go:1310] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0211 20:52:26.428615  123370 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.428728  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.428755  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.428757  123370 reflector.go:170] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0211 20:52:26.428791  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.428866  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.429270  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.429334  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.430048  123370 store.go:1310] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0211 20:52:26.430098  123370 reflector.go:170] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0211 20:52:26.430280  123370 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.430442  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.430463  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.430495  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.430596  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.431134  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.431288  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.431794  123370 store.go:1310] Monitoring deployments.apps count at <storage-prefix>//deployments
I0211 20:52:26.431854  123370 reflector.go:170] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0211 20:52:26.431975  123370 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.432060  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.432073  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.432105  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.432169  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.433145  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.433226  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.433751  123370 store.go:1310] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0211 20:52:26.433801  123370 reflector.go:170] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0211 20:52:26.433930  123370 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.434066  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.434094  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.434129  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.434189  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.434706  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.434804  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.435442  123370 store.go:1310] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0211 20:52:26.435481  123370 reflector.go:170] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0211 20:52:26.435618  123370 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.435711  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.435749  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.435797  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.435858  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.436561  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.436607  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.437277  123370 store.go:1310] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0211 20:52:26.437331  123370 reflector.go:170] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0211 20:52:26.437489  123370 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.437601  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.437627  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.437662  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.437729  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.438188  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.438268  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.438825  123370 store.go:1310] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0211 20:52:26.438860  123370 master.go:415] Enabling API group "extensions".
I0211 20:52:26.438961  123370 reflector.go:170] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0211 20:52:26.439033  123370 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.439135  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.439156  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.439191  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.439302  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.439697  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.439821  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.440400  123370 store.go:1310] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0211 20:52:26.440453  123370 master.go:415] Enabling API group "networking.k8s.io".
I0211 20:52:26.440488  123370 reflector.go:170] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0211 20:52:26.440818  123370 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.440930  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.440964  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.441039  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.441089  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.441465  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.441568  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.442145  123370 store.go:1310] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0211 20:52:26.442230  123370 reflector.go:170] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0211 20:52:26.442367  123370 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.442492  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.442511  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.442543  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.442583  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.443088  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.443213  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.443769  123370 store.go:1310] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0211 20:52:26.443789  123370 master.go:415] Enabling API group "policy".
I0211 20:52:26.443832  123370 reflector.go:170] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0211 20:52:26.443831  123370 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.443956  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.443973  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.444005  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.444047  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.444478  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.444582  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.444967  123370 store.go:1310] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0211 20:52:26.445040  123370 reflector.go:170] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0211 20:52:26.445335  123370 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.445648  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.445703  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.445793  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.445908  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.446348  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.446440  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.446965  123370 store.go:1310] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0211 20:52:26.447021  123370 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.447124  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.447149  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.447181  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.447293  123370 reflector.go:170] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0211 20:52:26.447314  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.448791  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.448891  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.449567  123370 store.go:1310] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0211 20:52:26.449605  123370 reflector.go:170] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0211 20:52:26.449825  123370 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.449957  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.449984  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.450045  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.450109  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.450526  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.450569  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.451194  123370 reflector.go:170] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0211 20:52:26.451215  123370 store.go:1310] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0211 20:52:26.451341  123370 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.451468  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.451485  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.451523  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.451568  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.453996  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.454094  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.454820  123370 store.go:1310] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0211 20:52:26.454908  123370 reflector.go:170] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0211 20:52:26.455012  123370 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.455107  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.455137  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.455179  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.455319  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.455790  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.456106  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.456459  123370 store.go:1310] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0211 20:52:26.456586  123370 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.456672  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.456697  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.456739  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.456485  123370 reflector.go:170] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0211 20:52:26.457044  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.459368  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.459498  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.460301  123370 store.go:1310] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0211 20:52:26.460438  123370 reflector.go:170] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0211 20:52:26.460564  123370 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.460695  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.460720  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.460765  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.461121  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.461679  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.461913  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.462124  123370 store.go:1310] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0211 20:52:26.462177  123370 master.go:415] Enabling API group "rbac.authorization.k8s.io".
I0211 20:52:26.462185  123370 reflector.go:170] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0211 20:52:26.464556  123370 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.464659  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.464682  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.464734  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.464809  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.465177  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.465249  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.465681  123370 store.go:1310] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0211 20:52:26.465712  123370 master.go:415] Enabling API group "scheduling.k8s.io".
I0211 20:52:26.465733  123370 master.go:407] Skipping disabled API group "settings.k8s.io".
I0211 20:52:26.465753  123370 reflector.go:170] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0211 20:52:26.466154  123370 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.466284  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.466303  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.466339  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.466395  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.466819  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.467115  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.467344  123370 store.go:1310] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0211 20:52:26.467392  123370 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.467475  123370 reflector.go:170] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0211 20:52:26.467551  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.467579  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.467615  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.467693  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.468895  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.469007  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.469368  123370 store.go:1310] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0211 20:52:26.469474  123370 reflector.go:170] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0211 20:52:26.469636  123370 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.469739  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.469750  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.469772  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.470480  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.471989  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.472317  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.472894  123370 store.go:1310] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0211 20:52:26.472929  123370 reflector.go:170] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0211 20:52:26.472956  123370 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.473065  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.473095  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.473131  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.473225  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.473662  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.473773  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.474509  123370 store.go:1310] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0211 20:52:26.474543  123370 master.go:415] Enabling API group "storage.k8s.io".
I0211 20:52:26.474600  123370 reflector.go:170] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0211 20:52:26.474743  123370 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.475017  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.475098  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.475228  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.475316  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.476129  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.476193  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.476765  123370 store.go:1310] Monitoring deployments.apps count at <storage-prefix>//deployments
I0211 20:52:26.476887  123370 reflector.go:170] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0211 20:52:26.476965  123370 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.477062  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.477093  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.477126  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.477355  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.477763  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.477851  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.478557  123370 store.go:1310] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0211 20:52:26.478619  123370 reflector.go:170] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0211 20:52:26.478756  123370 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.478848  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.478876  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.478956  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:26.478986  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.479042  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.479482  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.479589  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.479740  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:26.479767  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:26.479809  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:26.480024  123370 store.go:1310] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0211 20:52:26.480214  123370 reflector.go:170] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0211 20:52:26.480336  123370 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.480484  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.480504  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.480538  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.480586  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.480614  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:26.480969  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.481005  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:26.481869  123370 store.go:1310] Monitoring deployments.apps count at <storage-prefix>//deployments
I0211 20:52:26.482120  123370 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.482194  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.482213  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.482277  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.482330  123370 reflector.go:170] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0211 20:52:26.481364  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.482650  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.483113  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.483151  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.483764  123370 store.go:1310] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0211 20:52:26.483806  123370 reflector.go:170] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0211 20:52:26.483922  123370 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.484079  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.484103  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.484154  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.484229  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.484659  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.484794  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.485341  123370 store.go:1310] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0211 20:52:26.485445  123370 reflector.go:170] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0211 20:52:26.485562  123370 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.485655  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.485672  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.485781  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.485911  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.486368  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.486599  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.487082  123370 store.go:1310] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0211 20:52:26.487161  123370 reflector.go:170] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0211 20:52:26.487261  123370 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.487362  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.487391  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.487461  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.487533  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.488040  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.488105  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.488769  123370 store.go:1310] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0211 20:52:26.488814  123370 reflector.go:170] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0211 20:52:26.488939  123370 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.489098  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.489115  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.489149  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.489376  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.489973  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.490035  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.490557  123370 store.go:1310] Monitoring deployments.apps count at <storage-prefix>//deployments
I0211 20:52:26.490657  123370 reflector.go:170] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0211 20:52:26.490739  123370 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.492119  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.492147  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.492214  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.492307  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.492722  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.492792  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.493453  123370 store.go:1310] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0211 20:52:26.493488  123370 reflector.go:170] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0211 20:52:26.493628  123370 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.493729  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.493755  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.493804  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.493906  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.494286  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.494376  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.494932  123370 store.go:1310] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0211 20:52:26.495024  123370 reflector.go:170] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0211 20:52:26.495132  123370 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.495225  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.495258  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.495291  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.495385  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.496379  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.496498  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.497283  123370 store.go:1310] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0211 20:52:26.497357  123370 reflector.go:170] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0211 20:52:26.497476  123370 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.497582  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.497599  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.497639  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.497746  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.498169  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.498207  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.498940  123370 store.go:1310] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0211 20:52:26.498989  123370 master.go:415] Enabling API group "apps".
I0211 20:52:26.499007  123370 reflector.go:170] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0211 20:52:26.499027  123370 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.499177  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.499208  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.499257  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.499352  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.499726  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.499786  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.500347  123370 store.go:1310] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0211 20:52:26.500388  123370 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.500453  123370 reflector.go:170] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0211 20:52:26.500510  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.500603  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.500700  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.500760  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.501136  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.501310  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.501837  123370 store.go:1310] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0211 20:52:26.501887  123370 master.go:415] Enabling API group "admissionregistration.k8s.io".
I0211 20:52:26.501953  123370 reflector.go:170] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0211 20:52:26.501930  123370 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cefbd804-3d51-4834-a729-5f6c5123655d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 20:52:26.502191  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:26.502222  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:26.502268  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:26.502398  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:26.502825  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:26.502870  123370 store.go:1310] Monitoring events count at <storage-prefix>//events
I0211 20:52:26.502901  123370 master.go:415] Enabling API group "events.k8s.io".
I0211 20:52:26.503446  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 20:52:26.510618  123370 genericapiserver.go:330] Skipping API batch/v2alpha1 because it has no resources.
W0211 20:52:26.525851  123370 genericapiserver.go:330] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0211 20:52:26.526754  123370 genericapiserver.go:330] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0211 20:52:26.529911  123370 genericapiserver.go:330] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0211 20:52:26.549209  123370 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0211 20:52:26.549249  123370 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0211 20:52:26.549261  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:26.549271  123370 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 20:52:26.549279  123370 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 20:52:26.549507  123370 wrap.go:47] GET /healthz: (1.150139ms) 500
goroutine 93842 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc042518380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc042518380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc02be76fa0, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc036e915a8, 0xc03f938680, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc036e915a8, 0xc041664e00)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc036e915a8, 0xc041664e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc036e915a8, 0xc041664e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc036e915a8, 0xc041664e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc036e915a8, 0xc041664e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc036e915a8, 0xc041664e00)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc036e915a8, 0xc041664e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc036e915a8, 0xc041664e00)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc036e915a8, 0xc041664e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc036e915a8, 0xc041664e00)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc036e915a8, 0xc041664e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc036e915a8, 0xc041664d00)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc036e915a8, 0xc041664d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc042374de0, 0xc01bdcfcc0, 0x60dec80, 0xc036e915a8, 0xc041664d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41326]
I0211 20:52:26.550732  123370 wrap.go:47] GET /api/v1/services: (1.261447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41328]
I0211 20:52:26.555196  123370 wrap.go:47] GET /api/v1/services: (1.098937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41328]
I0211 20:52:26.558173  123370 wrap.go:47] GET /api/v1/namespaces/default: (978.748µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41328]
I0211 20:52:26.560182  123370 wrap.go:47] POST /api/v1/namespaces: (1.509177ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41328]
I0211 20:52:26.561601  123370 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (949.264µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41328]
I0211 20:52:26.565333  123370 wrap.go:47] POST /api/v1/namespaces/default/services: (3.271999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41328]
I0211 20:52:26.566899  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.093603ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41328]
I0211 20:52:26.569522  123370 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (2.139369ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41328]
I0211 20:52:26.571686  123370 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.536013ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41326]
I0211 20:52:26.571978  123370 wrap.go:47] GET /api/v1/namespaces/default: (1.879663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41328]
I0211 20:52:26.574964  123370 wrap.go:47] GET /api/v1/services: (2.455768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:26.574967  123370 wrap.go:47] GET /api/v1/services: (2.913892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41326]
I0211 20:52:26.574973  123370 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (2.301248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41332]
I0211 20:52:26.576842  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.211522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:26.577176  123370 wrap.go:47] POST /api/v1/namespaces: (4.939784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41328]
I0211 20:52:26.578555  123370 wrap.go:47] GET /api/v1/namespaces/kube-public: (898.538µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:26.580602  123370 wrap.go:47] POST /api/v1/namespaces: (1.457679ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:26.581956  123370 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (907.808µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:26.583982  123370 wrap.go:47] POST /api/v1/namespaces: (1.633082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:26.650558  123370 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0211 20:52:26.650600  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:26.650613  123370 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 20:52:26.650621  123370 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 20:52:26.650838  123370 wrap.go:47] GET /healthz: (452.297µs) 500
goroutine 93841 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0424b9dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0424b9dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01d772ee0, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc0431c8160, 0xc04326a180, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc0431c8160, 0xc0431f1600)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc0431c8160, 0xc0431f1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc0431c8160, 0xc0431f1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc0431c8160, 0xc0431f1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc0431c8160, 0xc0431f1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc0431c8160, 0xc0431f1600)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc0431c8160, 0xc0431f1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc0431c8160, 0xc0431f1600)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc0431c8160, 0xc0431f1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc0431c8160, 0xc0431f1600)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc0431c8160, 0xc0431f1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc0431c8160, 0xc0431f1500)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc0431c8160, 0xc0431f1500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc042623d40, 0xc01bdcfcc0, 0x60dec80, 0xc0431c8160, 0xc0431f1500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:26.750483  123370 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0211 20:52:26.750527  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:26.750540  123370 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 20:52:26.750549  123370 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 20:52:26.750780  123370 wrap.go:47] GET /healthz: (450.712µs) 500
goroutine 93907 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0424b9ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0424b9ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01d7730a0, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc0431c8178, 0xc04326a600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc0431c8178, 0xc0431f1a00)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc0431c8178, 0xc0431f1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc0431c8178, 0xc0431f1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc0431c8178, 0xc0431f1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc0431c8178, 0xc0431f1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc0431c8178, 0xc0431f1a00)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc0431c8178, 0xc0431f1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc0431c8178, 0xc0431f1a00)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc0431c8178, 0xc0431f1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc0431c8178, 0xc0431f1a00)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc0431c8178, 0xc0431f1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc0431c8178, 0xc0431f1900)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc0431c8178, 0xc0431f1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc042623e00, 0xc01bdcfcc0, 0x60dec80, 0xc0431c8178, 0xc0431f1900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:26.850507  123370 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0211 20:52:26.850545  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:26.850557  123370 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 20:52:26.850566  123370 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 20:52:26.850792  123370 wrap.go:47] GET /healthz: (444.18µs) 500
goroutine 93820 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc041e93d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc041e93d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01d78f9e0, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc040a4de70, 0xc0179a3b00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc040a4de70, 0xc0431b7d00)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc040a4de70, 0xc0431b7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc040a4de70, 0xc0431b7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc040a4de70, 0xc0431b7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc040a4de70, 0xc0431b7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc040a4de70, 0xc0431b7d00)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc040a4de70, 0xc0431b7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc040a4de70, 0xc0431b7d00)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc040a4de70, 0xc0431b7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc040a4de70, 0xc0431b7d00)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc040a4de70, 0xc0431b7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc040a4de70, 0xc0431b7c00)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc040a4de70, 0xc0431b7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043234840, 0xc01bdcfcc0, 0x60dec80, 0xc040a4de70, 0xc0431b7c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:26.950479  123370 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0211 20:52:26.950526  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:26.950538  123370 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 20:52:26.950548  123370 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 20:52:26.951336  123370 wrap.go:47] GET /healthz: (1.002847ms) 500
goroutine 93657 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0421c5110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0421c5110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01d7e47c0, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc04108c408, 0xc0432d4180, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc04108c408, 0xc0421af700)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc04108c408, 0xc0421af700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc04108c408, 0xc0421af700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc04108c408, 0xc0421af700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc04108c408, 0xc0421af700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc04108c408, 0xc0421af700)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc04108c408, 0xc0421af700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc04108c408, 0xc0421af700)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc04108c408, 0xc0421af700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc04108c408, 0xc0421af700)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc04108c408, 0xc0421af700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc04108c408, 0xc0421af600)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc04108c408, 0xc0421af600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc042157860, 0xc01bdcfcc0, 0x60dec80, 0xc04108c408, 0xc0421af600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:27.050567  123370 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0211 20:52:27.050616  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:27.050629  123370 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 20:52:27.050638  123370 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 20:52:27.050880  123370 wrap.go:47] GET /healthz: (498.596µs) 500
goroutine 93776 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc041b99c70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc041b99c70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01d86a5c0, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc039533ec8, 0xc0432f2000, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc039533ec8, 0xc0424e3c00)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc039533ec8, 0xc0424e3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc039533ec8, 0xc0424e3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc039533ec8, 0xc0424e3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc039533ec8, 0xc0424e3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc039533ec8, 0xc0424e3c00)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc039533ec8, 0xc0424e3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc039533ec8, 0xc0424e3c00)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc039533ec8, 0xc0424e3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc039533ec8, 0xc0424e3c00)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc039533ec8, 0xc0424e3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc039533ec8, 0xc0424e3b00)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc039533ec8, 0xc0424e3b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0432ee120, 0xc01bdcfcc0, 0x60dec80, 0xc039533ec8, 0xc0424e3b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:27.150625  123370 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0211 20:52:27.151202  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:27.151248  123370 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 20:52:27.151267  123370 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 20:52:27.151545  123370 wrap.go:47] GET /healthz: (1.039662ms) 500
goroutine 93868 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0422897a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0422897a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01d7b6900, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc0377b5ed0, 0xc043248600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc0377b5ed0, 0xc0431fb800)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc0377b5ed0, 0xc0431fb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc0377b5ed0, 0xc0431fb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc0377b5ed0, 0xc0431fb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc0377b5ed0, 0xc0431fb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc0377b5ed0, 0xc0431fb800)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc0377b5ed0, 0xc0431fb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc0377b5ed0, 0xc0431fb800)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc0377b5ed0, 0xc0431fb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc0377b5ed0, 0xc0431fb800)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc0377b5ed0, 0xc0431fb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc0377b5ed0, 0xc0431fb700)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc0377b5ed0, 0xc0431fb700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0431e3140, 0xc01bdcfcc0, 0x60dec80, 0xc0377b5ed0, 0xc0431fb700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:27.250543  123370 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0211 20:52:27.250589  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:27.250602  123370 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 20:52:27.250611  123370 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 20:52:27.251337  123370 wrap.go:47] GET /healthz: (979.288µs) 500
goroutine 93659 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0421c5340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0421c5340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01d7e4fe0, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc04108c478, 0xc0432d4780, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc04108c478, 0xc043328100)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc04108c478, 0xc043328100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc04108c478, 0xc043328100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc04108c478, 0xc043328100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc04108c478, 0xc043328100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc04108c478, 0xc043328100)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc04108c478, 0xc043328100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc04108c478, 0xc043328100)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc04108c478, 0xc043328100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc04108c478, 0xc043328100)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc04108c478, 0xc043328100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc04108c478, 0xc043328000)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc04108c478, 0xc043328000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc042157e00, 0xc01bdcfcc0, 0x60dec80, 0xc04108c478, 0xc043328000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:27.350554  123370 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0211 20:52:27.350621  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:27.350645  123370 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 20:52:27.350663  123370 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 20:52:27.350858  123370 wrap.go:47] GET /healthz: (490.349µs) 500
goroutine 93822 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc041e93f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc041e93f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01d89a020, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc040a4de90, 0xc043342300, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc040a4de90, 0xc043344200)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc040a4de90, 0xc043344200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc040a4de90, 0xc043344200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc040a4de90, 0xc043344200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc040a4de90, 0xc043344200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc040a4de90, 0xc043344200)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc040a4de90, 0xc043344200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc040a4de90, 0xc043344200)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc040a4de90, 0xc043344200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc040a4de90, 0xc043344200)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc040a4de90, 0xc043344200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc040a4de90, 0xc043344100)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc040a4de90, 0xc043344100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043234b40, 0xc01bdcfcc0, 0x60dec80, 0xc040a4de90, 0xc043344100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:27.361240  123370 clientconn.go:551] parsed scheme: ""
I0211 20:52:27.361304  123370 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 20:52:27.361375  123370 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 20:52:27.361492  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:27.361994  123370 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 20:52:27.362073  123370 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 20:52:27.451440  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:27.451468  123370 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 20:52:27.451477  123370 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 20:52:27.451673  123370 wrap.go:47] GET /healthz: (1.327538ms) 500
goroutine 93909 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc043398000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc043398000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01d773600, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc0431c81b0, 0xc02ec6b4a0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc0431c81b0, 0xc04339a100)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc0431c81b0, 0xc04339a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc0431c81b0, 0xc04339a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc0431c81b0, 0xc04339a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc0431c81b0, 0xc04339a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc0431c81b0, 0xc04339a100)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc0431c81b0, 0xc04339a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc0431c81b0, 0xc04339a100)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc0431c81b0, 0xc04339a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc0431c81b0, 0xc04339a100)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc0431c81b0, 0xc04339a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc0431c81b0, 0xc04339a000)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc0431c81b0, 0xc04339a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043396120, 0xc01bdcfcc0, 0x60dec80, 0xc0431c81b0, 0xc04339a000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
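
The repeated GET /healthz 500s above come from a client (presumably the test setup) polling the endpoint roughly every 100ms while the apiserver's post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, ca-registration) finish; once the etcd client connects, the etcd check flips from failed to ok. A minimal, hypothetical sketch of that kind of readiness poll follows; the URL, timeout and helper name are illustrative only and are not the test's actual code.

    // healthzwait.go: illustrative sketch of a /healthz readiness poll.
    // Everything here (names, URL, interval) is assumed, not taken from the test.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns 200 or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // every health check reported ok
                }
            }
            // ~100ms spacing, matching the timestamps seen in the log above.
            time.Sleep(100 * time.Millisecond)
        }
        return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("http://127.0.0.1:8080/healthz", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }
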
I0211 20:52:27.479654  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:27.479847  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:27.479869  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:27.480018  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:27.480820  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:27.481633  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:27.549973  123370 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.055743ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:27.550744  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.392659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.551070  123370 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.730386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41334]
I0211 20:52:27.551585  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:27.551621  123370 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 20:52:27.551634  123370 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 20:52:27.551822  123370 wrap.go:47] GET /healthz: (1.243385ms) 500
goroutine 93662 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0421c5490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0421c5490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01d7e5e40, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc04108c4d8, 0xc0432406e0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc04108c4d8, 0xc043328800)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc04108c4d8, 0xc043328800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc04108c4d8, 0xc043328800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc04108c4d8, 0xc043328800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc04108c4d8, 0xc043328800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc04108c4d8, 0xc043328800)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc04108c4d8, 0xc043328800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc04108c4d8, 0xc043328800)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc04108c4d8, 0xc043328800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc04108c4d8, 0xc043328800)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc04108c4d8, 0xc043328800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc04108c4d8, 0xc043328700)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc04108c4d8, 0xc043328700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043504060, 0xc01bdcfcc0, 0x60dec80, 0xc04108c4d8, 0xc043328700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41492]
I0211 20:52:27.552392  123370 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.478471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41494]
I0211 20:52:27.552847  123370 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0211 20:52:27.553856  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.278418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41496]
I0211 20:52:27.554187  123370 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.137133ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41494]
I0211 20:52:27.555141  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (861.009µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41496]
I0211 20:52:27.555658  123370 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (4.057942ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.556675  123370 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.012046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41494]
I0211 20:52:27.556905  123370 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0211 20:52:27.556935  123370 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0211 20:52:27.557262  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.657417ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41496]
I0211 20:52:27.558200  123370 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (2.048634ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.558889  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.146832ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41494]
I0211 20:52:27.560240  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (885.178µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.561571  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (885.574µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.563455  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.462103ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.565372  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.317847ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.567537  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.672687ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.567815  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0211 20:52:27.568938  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (871.407µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.570875  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.406274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.571110  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0211 20:52:27.572343  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (986.489µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.574643  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.730586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.574978  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0211 20:52:27.576215  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.005195ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.578656  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.942371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.578888  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0211 20:52:27.580187  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.082579ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.582288  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.438523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.582549  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0211 20:52:27.583757  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.020179ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.586000  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.651044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.586300  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0211 20:52:27.587536  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (906.301µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.589600  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.549624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.589929  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0211 20:52:27.591167  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (887.316µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.595877  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.128885ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.596233  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0211 20:52:27.597590  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (998.417µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.599848  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.815833ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.600275  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0211 20:52:27.601761  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.171407ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.604331  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.871896ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.604612  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0211 20:52:27.606343  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.463304ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.609101  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.264077ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.609477  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0211 20:52:27.610963  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.137539ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.613001  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.606731ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.613217  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0211 20:52:27.614308  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (831.239µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.616447  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.591804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.616671  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0211 20:52:27.617831  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (932.074µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.619880  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.551626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.620112  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0211 20:52:27.621543  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.178093ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.623620  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.610861ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.623834  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0211 20:52:27.625356  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.229989ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.627496  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.675278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.627745  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0211 20:52:27.628866  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (911.866µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.630795  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.389251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.631048  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0211 20:52:27.632132  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (878.779µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.634114  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.523158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.634372  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0211 20:52:27.635801  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.148292ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.638391  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.027358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.638787  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0211 20:52:27.640145  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.125727ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.642164  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.620812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.642518  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0211 20:52:27.643841  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.07534ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.646120  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.740913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.646448  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0211 20:52:27.647889  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.233478ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.650375  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.013647ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.650727  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0211 20:52:27.651328  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:27.651567  123370 wrap.go:47] GET /healthz: (1.393477ms) 500
goroutine 94099 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0437f0d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0437f0d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc04382f1c0, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc0435a0778, 0xc00423bcc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc0435a0778, 0xc043817600)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc0435a0778, 0xc043817600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc0435a0778, 0xc043817600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc0435a0778, 0xc043817600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc0435a0778, 0xc043817600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc0435a0778, 0xc043817600)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc0435a0778, 0xc043817600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc0435a0778, 0xc043817600)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc0435a0778, 0xc043817600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc0435a0778, 0xc043817600)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc0435a0778, 0xc043817600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc0435a0778, 0xc043817500)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc0435a0778, 0xc043817500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043832480, 0xc01bdcfcc0, 0x60dec80, 0xc0435a0778, 0xc043817500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41492]
I0211 20:52:27.651878  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (933.346µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.653972  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.653799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.654334  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0211 20:52:27.655576  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (975.256µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.657675  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.665861ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.657898  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0211 20:52:27.659330  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.192364ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.661483  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.711995ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.661766  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0211 20:52:27.663070  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.062945ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.665988  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.395029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.666197  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0211 20:52:27.667629  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.166515ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.669699  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.528657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.669983  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0211 20:52:27.671019  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (855.76µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.672907  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.382088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.673146  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0211 20:52:27.674335  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (940.799µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.676337  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.597709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.676609  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0211 20:52:27.677756  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (915.291µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.679832  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.583523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.680099  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0211 20:52:27.681251  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (876.346µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.683267  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.580402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.683576  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0211 20:52:27.684620  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (815.504µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.686759  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.660736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.687041  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0211 20:52:27.688196  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (920.338µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.690326  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.67089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.690559  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0211 20:52:27.691637  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (879.36µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.693676  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.571115ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.693985  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0211 20:52:27.695114  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (912.569µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.697080  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.470312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.697355  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0211 20:52:27.698368  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (778.055µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.700274  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.417413ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.700796  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0211 20:52:27.701809  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (757.948µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.703985  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.669447ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.704283  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0211 20:52:27.705307  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (815.408µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.707206  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.474306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.707497  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0211 20:52:27.708535  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (821.55µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.710359  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.386001ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.710610  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0211 20:52:27.711706  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (849.634µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.713778  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.551355ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.713997  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0211 20:52:27.715128  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (882.026µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.717245  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.538683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.717560  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0211 20:52:27.718591  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (824.774µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.720810  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.7455ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.721072  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0211 20:52:27.722134  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (822.977µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.724248  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.621941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.724486  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0211 20:52:27.725589  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (900.228µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.727575  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.581888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.727781  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0211 20:52:27.728935  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (939.386µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.730899  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.537351ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.731109  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0211 20:52:27.732237  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (924.997µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.734211  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.474364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.734493  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0211 20:52:27.735633  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (899.113µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.737552  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.379336ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.737762  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0211 20:52:27.738990  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (949.746µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.740865  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.365006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.741098  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0211 20:52:27.742148  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (809.193µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.744264  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.678653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.744576  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0211 20:52:27.745689  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (884.34µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.747833  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.589112ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.748088  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0211 20:52:27.749254  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (919.496µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.751070  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:27.751319  123370 wrap.go:47] GET /healthz: (1.13547ms) 500
goroutine 94097 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0438395e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0438395e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc043bfcf60, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc0396bd570, 0xc03785b180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc0396bd570, 0xc043bd1400)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc0396bd570, 0xc043bd1400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc0396bd570, 0xc043bd1400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc0396bd570, 0xc043bd1400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc0396bd570, 0xc043bd1400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc0396bd570, 0xc043bd1400)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc0396bd570, 0xc043bd1400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc0396bd570, 0xc043bd1400)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc0396bd570, 0xc043bd1400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc0396bd570, 0xc043bd1400)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc0396bd570, 0xc043bd1400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc0396bd570, 0xc043bd1300)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc0396bd570, 0xc043bd1300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043c200c0, 0xc01bdcfcc0, 0x60dec80, 0xc0396bd570, 0xc043bd1300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41492]
I0211 20:52:27.751854  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.140507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.752081  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0211 20:52:27.770140  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.575122ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.790581  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.973938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.790828  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0211 20:52:27.810023  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.437676ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.831032  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.372817ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.831283  123370 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0211 20:52:27.850204  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.574432ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.851431  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:27.851677  123370 wrap.go:47] GET /healthz: (1.513682ms) 500
goroutine 94233 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc043839f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc043839f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc043c84980, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc0396bd6a0, 0xc03785b540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc0396bd6a0, 0xc043c9a700)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc0396bd6a0, 0xc043c9a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc0396bd6a0, 0xc043c9a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc0396bd6a0, 0xc043c9a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc0396bd6a0, 0xc043c9a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc0396bd6a0, 0xc043c9a700)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc0396bd6a0, 0xc043c9a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc0396bd6a0, 0xc043c9a700)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc0396bd6a0, 0xc043c9a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc0396bd6a0, 0xc043c9a700)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc0396bd6a0, 0xc043c9a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc0396bd6a0, 0xc043c9a600)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc0396bd6a0, 0xc043c9a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043c214a0, 0xc01bdcfcc0, 0x60dec80, 0xc0396bd6a0, 0xc043c9a600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41492]
I0211 20:52:27.870531  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.95512ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:27.870773  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0211 20:52:27.889889  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.345513ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:27.911077  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.393211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:27.911932  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0211 20:52:27.931391  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.516011ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:27.951112  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.494052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:27.951237  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:27.951357  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0211 20:52:27.951493  123370 wrap.go:47] GET /healthz: (1.305872ms) 500
goroutine 94259 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc043c5ed20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc043c5ed20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc043c8b760, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc043604eb8, 0xc032f48a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc043604eb8, 0xc043c87200)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc043604eb8, 0xc043c87200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc043604eb8, 0xc043c87200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc043604eb8, 0xc043c87200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc043604eb8, 0xc043c87200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc043604eb8, 0xc043c87200)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc043604eb8, 0xc043c87200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc043604eb8, 0xc043c87200)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc043604eb8, 0xc043c87200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc043604eb8, 0xc043c87200)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc043604eb8, 0xc043c87200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc043604eb8, 0xc043c87100)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc043604eb8, 0xc043c87100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043c92840, 0xc01bdcfcc0, 0x60dec80, 0xc043604eb8, 0xc043c87100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:27.969679  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.143117ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.990381  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.867554ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:27.990650  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0211 20:52:28.009994  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.460591ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.031518  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.314308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.031805  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0211 20:52:28.050055  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.486611ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.051039  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:28.051279  123370 wrap.go:47] GET /healthz: (1.087627ms) 500
goroutine 94307 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc043cace70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc043cace70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc043d34c00, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc0435a0eb0, 0xc037406b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc0435a0eb0, 0xc043d74000)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc0435a0eb0, 0xc043d74000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc0435a0eb0, 0xc043d74000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc0435a0eb0, 0xc043d74000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc0435a0eb0, 0xc043d74000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc0435a0eb0, 0xc043d74000)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc0435a0eb0, 0xc043d74000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc0435a0eb0, 0xc043d74000)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc0435a0eb0, 0xc043d74000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc0435a0eb0, 0xc043d74000)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc0435a0eb0, 0xc043d74000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc0435a0eb0, 0xc043c45f00)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc0435a0eb0, 0xc043c45f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043c31680, 0xc01bdcfcc0, 0x60dec80, 0xc0435a0eb0, 0xc043c45f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41492]
I0211 20:52:28.070788  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.166334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.071136  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0211 20:52:28.090114  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.459836ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.111171  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.62004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.111519  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0211 20:52:28.130193  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.563905ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.151052  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.421942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.151352  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:28.151824  123370 wrap.go:47] GET /healthz: (1.586742ms) 500
goroutine 94377 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc043c5fb20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc043c5fb20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc043d6d780, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc043605088, 0xc022b61680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc043605088, 0xc043da0d00)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc043605088, 0xc043da0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc043605088, 0xc043da0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc043605088, 0xc043da0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc043605088, 0xc043da0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc043605088, 0xc043da0d00)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc043605088, 0xc043da0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc043605088, 0xc043da0d00)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc043605088, 0xc043da0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc043605088, 0xc043da0d00)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc043605088, 0xc043da0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc043605088, 0xc043da0c00)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc043605088, 0xc043da0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043c936e0, 0xc01bdcfcc0, 0x60dec80, 0xc043605088, 0xc043da0c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:28.152211  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0211 20:52:28.170333  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.661739ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.190820  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.092815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.191092  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0211 20:52:28.200432  123370 wrap.go:47] GET /api/v1/namespaces/default: (1.489756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:52:28.202216  123370 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.297076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:52:28.204029  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.300285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:52:28.209935  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.210946ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.233307  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.118726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.233643  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0211 20:52:28.249955  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.360351ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.251160  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:28.251592  123370 wrap.go:47] GET /healthz: (1.353307ms) 500
goroutine 94303 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc043e204d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc043e204d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc043daf920, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc0396bda90, 0xc037406f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc0396bda90, 0xc043d41800)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc0396bda90, 0xc043d41800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc0396bda90, 0xc043d41800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc0396bda90, 0xc043d41800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc0396bda90, 0xc043d41800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc0396bda90, 0xc043d41800)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc0396bda90, 0xc043d41800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc0396bda90, 0xc043d41800)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc0396bda90, 0xc043d41800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc0396bda90, 0xc043d41800)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc0396bda90, 0xc043d41800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc0396bda90, 0xc043d41700)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc0396bda90, 0xc043d41700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043d3f980, 0xc01bdcfcc0, 0x60dec80, 0xc0396bda90, 0xc043d41700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:28.271003  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.33104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.271326  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0211 20:52:28.290898  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.904198ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.312503  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.87135ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.313074  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0211 20:52:28.330131  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.428981ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.350634  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.910074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.350972  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0211 20:52:28.351498  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:28.351912  123370 wrap.go:47] GET /healthz: (1.222607ms) 500
goroutine 94305 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc043e207e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc043e207e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc043dafd60, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc0396bdb00, 0xc022b61a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc0396bdb00, 0xc043d41f00)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc0396bdb00, 0xc043d41f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc0396bdb00, 0xc043d41f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc0396bdb00, 0xc043d41f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc0396bdb00, 0xc043d41f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc0396bdb00, 0xc043d41f00)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc0396bdb00, 0xc043d41f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc0396bdb00, 0xc043d41f00)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc0396bdb00, 0xc043d41f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc0396bdb00, 0xc043d41f00)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc0396bdb00, 0xc043d41f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc0396bdb00, 0xc043d41e00)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc0396bdb00, 0xc043d41e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043d3ff20, 0xc01bdcfcc0, 0x60dec80, 0xc0396bdb00, 0xc043d41e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41492]
I0211 20:52:28.370266  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.574955ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.390963  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.292893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.391225  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0211 20:52:28.410491  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.95306ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.430922  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.289268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.431461  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0211 20:52:28.458644  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:28.458807  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (9.793478ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.458923  123370 wrap.go:47] GET /healthz: (4.145513ms) 500
goroutine 94403 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc043e212d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc043e212d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc043ecf180, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc0396bdc00, 0xc036a41180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc0396bdc00, 0xc043ebc700)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc0396bdc00, 0xc043ebc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc0396bdc00, 0xc043ebc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc0396bdc00, 0xc043ebc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc0396bdc00, 0xc043ebc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc0396bdc00, 0xc043ebc700)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc0396bdc00, 0xc043ebc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc0396bdc00, 0xc043ebc700)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc0396bdc00, 0xc043ebc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc0396bdc00, 0xc043ebc700)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc0396bdc00, 0xc043ebc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc0396bdc00, 0xc043ebc600)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc0396bdc00, 0xc043ebc600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043eba420, 0xc01bdcfcc0, 0x60dec80, 0xc0396bdc00, 0xc043ebc600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:28.470831  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.332306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.471099  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0211 20:52:28.480008  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:28.480051  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:28.480238  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:28.480784  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:28.480956  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:28.481792  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:28.489614  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.089311ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.511073  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.431648ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.511716  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0211 20:52:28.529806  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.250024ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.551394  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.867382ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.554517  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0211 20:52:28.554578  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:28.554837  123370 wrap.go:47] GET /healthz: (4.309299ms) 500
goroutine 94060 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc043dcf2d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc043dcf2d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc043e75b40, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc043056e70, 0xc032f48dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc043056e70, 0xc043f86700)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc043056e70, 0xc043f86700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc043056e70, 0xc043f86700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc043056e70, 0xc043f86700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc043056e70, 0xc043f86700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc043056e70, 0xc043f86700)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc043056e70, 0xc043f86700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc043056e70, 0xc043f86700)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc043056e70, 0xc043f86700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc043056e70, 0xc043f86700)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc043056e70, 0xc043f86700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc043056e70, 0xc043f86600)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc043056e70, 0xc043f86600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043e7ac60, 0xc01bdcfcc0, 0x60dec80, 0xc043056e70, 0xc043f86600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41492]
I0211 20:52:28.570158  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.457119ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.590674  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.034463ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.591520  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0211 20:52:28.609683  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.125826ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.630787  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.167375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.631121  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0211 20:52:28.649983  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.332263ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:28.651134  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:28.651335  123370 wrap.go:47] GET /healthz: (1.016917ms) 500
goroutine 94382 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc043f76e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc043f76e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc043edd8e0, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc043605290, 0xc036a417c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc043605290, 0xc0440ee400)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc043605290, 0xc0440ee400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc043605290, 0xc0440ee400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc043605290, 0xc0440ee400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc043605290, 0xc0440ee400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc043605290, 0xc0440ee400)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc043605290, 0xc0440ee400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc043605290, 0xc0440ee400)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc043605290, 0xc0440ee400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc043605290, 0xc0440ee400)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc043605290, 0xc0440ee400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc043605290, 0xc0440ee300)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc043605290, 0xc0440ee300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043f7a540, 0xc01bdcfcc0, 0x60dec80, 0xc043605290, 0xc0440ee300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:28.671030  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.392826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.671304  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0211 20:52:28.689812  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.22552ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.711070  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.497077ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.711342  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0211 20:52:28.730392  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.687778ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.751186  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:28.751480  123370 wrap.go:47] GET /healthz: (1.251419ms) 500
goroutine 94453 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc043f77490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc043f77490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc04413ece0, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc043605390, 0xc036a41cc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc043605390, 0xc0440ef800)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc043605390, 0xc0440ef800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc043605390, 0xc0440ef800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc043605390, 0xc0440ef800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc043605390, 0xc0440ef800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc043605390, 0xc0440ef800)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc043605390, 0xc0440ef800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc043605390, 0xc0440ef800)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc043605390, 0xc0440ef800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc043605390, 0xc0440ef800)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc043605390, 0xc0440ef800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc043605390, 0xc0440ef700)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc043605390, 0xc0440ef700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043f7b440, 0xc01bdcfcc0, 0x60dec80, 0xc043605390, 0xc0440ef700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41492]
I0211 20:52:28.751523  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.938221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.751834  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0211 20:52:28.770123  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.470804ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.790918  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.354835ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.791325  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0211 20:52:28.810751  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (2.169666ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.831116  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.388072ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.831449  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0211 20:52:28.849719  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.171757ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.851257  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:28.851466  123370 wrap.go:47] GET /healthz: (1.365886ms) 500
goroutine 94412 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0440d2e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0440d2e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc04418a600, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc0396bdf10, 0xc0441d0140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc0396bdf10, 0xc0440d7100)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc0396bdf10, 0xc0440d7100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc0396bdf10, 0xc0440d7100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc0396bdf10, 0xc0440d7100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc0396bdf10, 0xc0440d7100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc0396bdf10, 0xc0440d7100)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc0396bdf10, 0xc0440d7100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc0396bdf10, 0xc0440d7100)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc0396bdf10, 0xc0440d7100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc0396bdf10, 0xc0440d7100)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc0396bdf10, 0xc0440d7100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc0396bdf10, 0xc0440d7000)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc0396bdf10, 0xc0440d7000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043ebbd40, 0xc01bdcfcc0, 0x60dec80, 0xc0396bdf10, 0xc0440d7000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:28.870802  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.165192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.871217  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0211 20:52:28.889879  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.254475ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.910912  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.212794ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.911209  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0211 20:52:28.931063  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.068066ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.950618  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.972033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.951031  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0211 20:52:28.952054  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:28.952264  123370 wrap.go:47] GET /healthz: (1.153207ms) 500
goroutine 94351 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0442029a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0442029a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0441ff6c0, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc0431c8fa8, 0xc03785bcc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc0431c8fa8, 0xc044222b00)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc0431c8fa8, 0xc044222b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc0431c8fa8, 0xc044222b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc0431c8fa8, 0xc044222b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc0431c8fa8, 0xc044222b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc0431c8fa8, 0xc044222b00)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc0431c8fa8, 0xc044222b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc0431c8fa8, 0xc044222b00)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc0431c8fa8, 0xc044222b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc0431c8fa8, 0xc044222b00)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc0431c8fa8, 0xc044222b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc0431c8fa8, 0xc044222a00)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc0431c8fa8, 0xc044222a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043db9bc0, 0xc01bdcfcc0, 0x60dec80, 0xc0431c8fa8, 0xc044222a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:28.969858  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.31833ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.990703  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.134535ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:28.990989  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0211 20:52:29.010246  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.451401ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.031377  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.762393ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.031693  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0211 20:52:29.050149  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.201932ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.051066  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:29.051321  123370 wrap.go:47] GET /healthz: (1.134715ms) 500
goroutine 94457 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc043f77c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc043f77c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc04413fd60, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc0436054a8, 0xc032f49400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc0436054a8, 0xc044226900)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc0436054a8, 0xc044226900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc0436054a8, 0xc044226900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc0436054a8, 0xc044226900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc0436054a8, 0xc044226900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc0436054a8, 0xc044226900)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc0436054a8, 0xc044226900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc0436054a8, 0xc044226900)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc0436054a8, 0xc044226900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc0436054a8, 0xc044226900)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc0436054a8, 0xc044226900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc0436054a8, 0xc044226800)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc0436054a8, 0xc044226800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc043f7bc20, 0xc01bdcfcc0, 0x60dec80, 0xc0436054a8, 0xc044226800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41492]
I0211 20:52:29.070624  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.877655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.071346  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0211 20:52:29.089816  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.317168ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.110824  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.105821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.111132  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0211 20:52:29.129501  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (993.6µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.150974  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.432078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.151159  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:29.151233  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0211 20:52:29.151363  123370 wrap.go:47] GET /healthz: (1.099541ms) 500
goroutine 94461 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0442de770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0442de770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0442b6d40, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc0436055b8, 0xc032f49900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc0436055b8, 0xc044227900)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc0436055b8, 0xc044227900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc0436055b8, 0xc044227900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc0436055b8, 0xc044227900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc0436055b8, 0xc044227900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc0436055b8, 0xc044227900)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc0436055b8, 0xc044227900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc0436055b8, 0xc044227900)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc0436055b8, 0xc044227900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc0436055b8, 0xc044227900)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc0436055b8, 0xc044227900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc0436055b8, 0xc044227800)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc0436055b8, 0xc044227800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0442e6c60, 0xc01bdcfcc0, 0x60dec80, 0xc0436055b8, 0xc044227800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:29.170060  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.378034ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.190987  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.068713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.191264  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0211 20:52:29.209646  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.118899ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.230674  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.119676ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.230985  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0211 20:52:29.249764  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.14352ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.250991  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:29.251307  123370 wrap.go:47] GET /healthz: (1.145953ms) 500
goroutine 94432 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc044128850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc044128850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc04436c300, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc043810d08, 0xc04437e140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc043810d08, 0xc03f58ac00)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc043810d08, 0xc03f58ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc043810d08, 0xc03f58ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc043810d08, 0xc03f58ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc043810d08, 0xc03f58ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc043810d08, 0xc03f58ac00)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc043810d08, 0xc03f58ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc043810d08, 0xc03f58ac00)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc043810d08, 0xc03f58ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc043810d08, 0xc03f58ac00)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc043810d08, 0xc03f58ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc043810d08, 0xc03f58ab00)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc043810d08, 0xc03f58ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc04414a420, 0xc01bdcfcc0, 0x60dec80, 0xc043810d08, 0xc03f58ab00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:29.272827  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.877983ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.273114  123370 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0211 20:52:29.289956  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.325408ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.292115  123370 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.236251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.310618  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.939531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.310869  123370 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0211 20:52:29.330101  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.428232ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.331990  123370 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.383315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.351755  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:29.351987  123370 wrap.go:47] GET /healthz: (1.57468ms) 500
goroutine 94495 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0442beaf0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0442beaf0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0443da900, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc0442382f0, 0xc04437e500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc0442382f0, 0xc0443de600)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc0442382f0, 0xc0443de600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc0442382f0, 0xc0443de600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc0442382f0, 0xc0443de600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc0442382f0, 0xc0443de600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc0442382f0, 0xc0443de600)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc0442382f0, 0xc0443de600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc0442382f0, 0xc0443de600)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc0442382f0, 0xc0443de600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc0442382f0, 0xc0443de600)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc0442382f0, 0xc0443de600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc0442382f0, 0xc0443de500)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc0442382f0, 0xc0443de500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0443e22a0, 0xc01bdcfcc0, 0x60dec80, 0xc0442382f0, 0xc0443de500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41492]
I0211 20:52:29.352337  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.200147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.352600  123370 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0211 20:52:29.369826  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.26427ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.371991  123370 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.465516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.390964  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.41055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.391315  123370 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0211 20:52:29.409811  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.192193ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.411890  123370 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.50127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.431241  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.619999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.431614  123370 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0211 20:52:29.452149  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:29.452503  123370 wrap.go:47] GET /healthz: (1.392268ms) 500
goroutine 94565 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc044129500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc044129500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0444643c0, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc043810e90, 0xc04437e8c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc043810e90, 0xc04446e100)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc043810e90, 0xc04446e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc043810e90, 0xc04446e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc043810e90, 0xc04446e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc043810e90, 0xc04446e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc043810e90, 0xc04446e100)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc043810e90, 0xc04446e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc043810e90, 0xc04446e100)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc043810e90, 0xc04446e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc043810e90, 0xc04446e100)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc043810e90, 0xc04446e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc043810e90, 0xc04446e000)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc043810e90, 0xc04446e000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc04414b200, 0xc01bdcfcc0, 0x60dec80, 0xc043810e90, 0xc04446e000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:29.452848  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.744444ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.456643  123370 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.501825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.470624  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.005052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.470874  123370 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0211 20:52:29.480188  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:29.480191  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:29.480448  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:29.481003  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:29.481124  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:29.481976  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:29.489507  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.003373ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.491389  123370 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.30292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.511872  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.256017ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.512138  123370 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0211 20:52:29.529689  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.119242ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.531562  123370 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.3221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.551159  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:29.551324  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.691555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.551352  123370 wrap.go:47] GET /healthz: (1.092816ms) 500
goroutine 94509 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0443693b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0443693b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0444f4d60, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc0431c93a8, 0xc032b64dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc0431c93a8, 0xc0442fdf00)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc0431c93a8, 0xc0442fdf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc0431c93a8, 0xc0442fdf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc0431c93a8, 0xc0442fdf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc0431c93a8, 0xc0442fdf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc0431c93a8, 0xc0442fdf00)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc0431c93a8, 0xc0442fdf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc0431c93a8, 0xc0442fdf00)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc0431c93a8, 0xc0442fdf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc0431c93a8, 0xc0442fdf00)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc0431c93a8, 0xc0442fdf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc0431c93a8, 0xc0442fde00)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc0431c93a8, 0xc0442fde00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc044283d40, 0xc01bdcfcc0, 0x60dec80, 0xc0431c93a8, 0xc0442fde00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41330]
I0211 20:52:29.551583  123370 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0211 20:52:29.570213  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.580126ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.571940  123370 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.292193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.592258  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.499413ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.592522  123370 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0211 20:52:29.609634  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.065254ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.611875  123370 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.624987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.630818  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.291112ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.631114  123370 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0211 20:52:29.650081  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.208949ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.651017  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:29.651217  123370 wrap.go:47] GET /healthz: (1.039319ms) 500
goroutine 94537 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc044434ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc044434ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc044680300, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc04108dba8, 0xc032b65180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc04108dba8, 0xc0446b4400)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc04108dba8, 0xc0446b4400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc04108dba8, 0xc0446b4400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc04108dba8, 0xc0446b4400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc04108dba8, 0xc0446b4400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc04108dba8, 0xc0446b4400)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc04108dba8, 0xc0446b4400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc04108dba8, 0xc0446b4400)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc04108dba8, 0xc0446b4400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc04108dba8, 0xc0446b4400)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc04108dba8, 0xc0446b4400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc04108dba8, 0xc0446b4300)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc04108dba8, 0xc0446b4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0443577a0, 0xc01bdcfcc0, 0x60dec80, 0xc04108dba8, 0xc0446b4300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41492]
I0211 20:52:29.651867  123370 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.317691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.670496  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.943285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.670764  123370 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0211 20:52:29.689693  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.207992ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.692139  123370 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.90948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.710233  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.710991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.710504  123370 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0211 20:52:29.731277  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.406705ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.733263  123370 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.572695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.750580  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.017618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.751146  123370 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0211 20:52:29.751849  123370 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 20:52:29.752066  123370 wrap.go:47] GET /healthz: (1.11685ms) 500
goroutine 94599 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0446528c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0446528c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0446fcc20, 0x1f4)
net/http.Error(0x7fa53c66c9f8, 0xc0437af638, 0xc04473c000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fa53c66c9f8, 0xc0437af638, 0xc04464c900)
net/http.HandlerFunc.ServeHTTP(0xc01d64fce0, 0x7fa53c66c9f8, 0xc0437af638, 0xc04464c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0430dea00, 0x7fa53c66c9f8, 0xc0437af638, 0xc04464c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc03fead9d0, 0x7fa53c66c9f8, 0xc0437af638, 0xc04464c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc03feaabd0, 0xc03fead9d0, 0x7fa53c66c9f8, 0xc0437af638, 0xc04464c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa53c66c9f8, 0xc0437af638, 0xc04464c900)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9a80, 0x7fa53c66c9f8, 0xc0437af638, 0xc04464c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa53c66c9f8, 0xc0437af638, 0xc04464c900)
net/http.HandlerFunc.ServeHTTP(0xc03cb9de00, 0x7fa53c66c9f8, 0xc0437af638, 0xc04464c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa53c66c9f8, 0xc0437af638, 0xc04464c900)
net/http.HandlerFunc.ServeHTTP(0xc03ffc9ac0, 0x7fa53c66c9f8, 0xc0437af638, 0xc04464c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa53c66c9f8, 0xc0437af638, 0xc04464c800)
net/http.HandlerFunc.ServeHTTP(0xc040008410, 0x7fa53c66c9f8, 0xc0437af638, 0xc04464c800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0444a3440, 0xc01bdcfcc0, 0x60dec80, 0xc0437af638, 0xc04464c800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41492]
I0211 20:52:29.769894  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.290167ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.771741  123370 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.322676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.790779  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.14887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.791082  123370 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0211 20:52:29.809966  123370 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.380771ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.811678  123370 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.208475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.830866  123370 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.275739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.831178  123370 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0211 20:52:29.851364  123370 wrap.go:47] GET /healthz: (1.036307ms) 200 [Go-http-client/1.1 127.0.0.1:41492]
W0211 20:52:29.852185  123370 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 20:52:29.852363  123370 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 20:52:29.852467  123370 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 20:52:29.852496  123370 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 20:52:29.852552  123370 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 20:52:29.852578  123370 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 20:52:29.852590  123370 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 20:52:29.852686  123370 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 20:52:29.852816  123370 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 20:52:29.853023  123370 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0211 20:52:29.853077  123370 factory.go:331] Creating scheduler from algorithm provider 'DefaultProvider'
I0211 20:52:29.853092  123370 factory.go:412] Creating scheduler with fit predicates 'map[NoVolumeZoneConflict:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} CheckVolumeBinding:{} GeneralPredicates:{} PodToleratesNodeTaints:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} MatchInterPodAffinity:{} CheckNodeUnschedulable:{} NoDiskConflict:{}]' and priority functions 'map[TaintTolerationPriority:{} ImageLocalityPriority:{} SelectorSpreadPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{} NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{}]'
I0211 20:52:29.853938  123370 reflector.go:132] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.853986  123370 reflector.go:170] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.854085  123370 reflector.go:132] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.854117  123370 reflector.go:170] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.854262  123370 reflector.go:132] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.854292  123370 reflector.go:170] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.854327  123370 reflector.go:132] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.854356  123370 reflector.go:170] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.854341  123370 reflector.go:132] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.854703  123370 reflector.go:170] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.855311  123370 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (673.39µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:52:29.855556  123370 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (435.794µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41678]
I0211 20:52:29.855561  123370 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (495.245µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41676]
I0211 20:52:29.855781  123370 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (603.516µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41680]
I0211 20:52:29.856136  123370 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30518 labels= fields= timeout=7m48s
I0211 20:52:29.856263  123370 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30522 labels= fields= timeout=6m18s
I0211 20:52:29.856527  123370 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=30520 labels= fields= timeout=7m11s
I0211 20:52:29.856662  123370 reflector.go:132] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.856675  123370 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=30518 labels= fields= timeout=5m26s
I0211 20:52:29.856756  123370 reflector.go:132] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.856773  123370 reflector.go:170] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.856776  123370 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (2.310697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.856682  123370 reflector.go:170] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.856834  123370 reflector.go:132] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.856884  123370 reflector.go:170] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.857048  123370 reflector.go:132] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.857075  123370 reflector.go:170] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.857863  123370 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (751.259µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41492]
I0211 20:52:29.857924  123370 wrap.go:47] GET /api/v1/pods?limit=500&resourceVersion=0: (456.803µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41688]
I0211 20:52:29.858001  123370 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (644.08µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41684]
I0211 20:52:29.858009  123370 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=30524 labels= fields= timeout=9m25s
I0211 20:52:29.858066  123370 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (526.471µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41690]
I0211 20:52:29.858655  123370 get.go:251] Starting watch for /api/v1/pods, rv=30518 labels= fields= timeout=7m56s
I0211 20:52:29.858740  123370 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=30524 labels= fields= timeout=8m46s
I0211 20:52:29.859078  123370 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30518 labels= fields= timeout=5m44s
I0211 20:52:29.859283  123370 get.go:251] Starting watch for /api/v1/services, rv=30531 labels= fields= timeout=9m10s
I0211 20:52:29.859685  123370 reflector.go:132] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.859715  123370 reflector.go:170] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:132
I0211 20:52:29.860766  123370 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (560.293µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41696]
I0211 20:52:29.862036  123370 get.go:251] Starting watch for /api/v1/nodes, rv=30518 labels= fields= timeout=7m8s
I0211 20:52:29.960159  123370 shared_informer.go:123] caches populated
I0211 20:52:30.060447  123370 shared_informer.go:123] caches populated
I0211 20:52:30.160638  123370 shared_informer.go:123] caches populated
I0211 20:52:30.260918  123370 shared_informer.go:123] caches populated
I0211 20:52:30.361179  123370 shared_informer.go:123] caches populated
I0211 20:52:30.461442  123370 shared_informer.go:123] caches populated
I0211 20:52:30.480380  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:30.480381  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:30.480637  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:30.481167  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:30.481300  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:30.482217  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:30.561691  123370 shared_informer.go:123] caches populated
I0211 20:52:30.661975  123370 shared_informer.go:123] caches populated
I0211 20:52:30.762165  123370 shared_informer.go:123] caches populated
I0211 20:52:30.862485  123370 shared_informer.go:123] caches populated
I0211 20:52:30.962736  123370 shared_informer.go:123] caches populated
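(Editor's note: the "Starting reflector ..." and "caches populated" entries above are the shared informers listing and watching each resource, then being waited on until every cache has synced before scheduling proceeds. The following is a minimal client-go sketch of that pattern, assuming a ready kubernetes.Interface named client; it is illustrative only and not the test's wiring.)

    // Hedged sketch: start shared informers and block until their caches are synced.
    package main

    import (
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
    )

    func startInformers(client kubernetes.Interface, stopCh <-chan struct{}) bool {
        factory := informers.NewSharedInformerFactory(client, 0) // resync period 0, as in the log
        podInformer := factory.Core().V1().Pods().Informer()
        pvcInformer := factory.Core().V1().PersistentVolumeClaims().Informer()

        factory.Start(stopCh) // each informer lists then watches, like the reflectors above
        return cache.WaitForCacheSync(stopCh, podInformer.HasSynced, pvcInformer.HasSynced)
    }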
I0211 20:52:30.963456  123370 plugins.go:553] Loaded volume plugin "kubernetes.io/mock-provisioner"
W0211 20:52:30.963498  123370 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 20:52:30.963546  123370 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 20:52:30.963583  123370 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 20:52:30.963605  123370 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 20:52:30.963621  123370 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0211 20:52:30.963679  123370 pv_controller_base.go:271] Starting persistent volume controller
I0211 20:52:30.963691  123370 controller_utils.go:1021] Waiting for caches to sync for persistent volume controller
I0211 20:52:30.964237  123370 reflector.go:132] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:132
I0211 20:52:30.964356  123370 reflector.go:170] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:132
I0211 20:52:30.964313  123370 reflector.go:132] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:132
I0211 20:52:30.964473  123370 reflector.go:170] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:132
I0211 20:52:30.964570  123370 reflector.go:132] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:132
I0211 20:52:30.964628  123370 reflector.go:170] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:132
I0211 20:52:30.964695  123370 reflector.go:132] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:132
I0211 20:52:30.964729  123370 reflector.go:170] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:132
I0211 20:52:30.964990  123370 reflector.go:132] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:132
I0211 20:52:30.965017  123370 reflector.go:170] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:132
I0211 20:52:30.966277  123370 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (671.347µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41840]
I0211 20:52:30.966310  123370 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (574.907µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41842]
I0211 20:52:30.966277  123370 wrap.go:47] GET /api/v1/pods?limit=500&resourceVersion=0: (684.025µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41838]
I0211 20:52:30.966353  123370 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (757.48µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41836]
I0211 20:52:30.966312  123370 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (555.375µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41844]
I0211 20:52:30.967366  123370 get.go:251] Starting watch for /api/v1/nodes, rv=30518 labels= fields= timeout=7m48s
I0211 20:52:30.967452  123370 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30518 labels= fields= timeout=7m15s
I0211 20:52:30.967486  123370 get.go:251] Starting watch for /api/v1/pods, rv=30518 labels= fields= timeout=5m25s
I0211 20:52:30.967776  123370 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30522 labels= fields= timeout=8m28s
I0211 20:52:30.967959  123370 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30518 labels= fields= timeout=5m46s
I0211 20:52:31.063835  123370 shared_informer.go:123] caches populated
I0211 20:52:31.063835  123370 shared_informer.go:123] caches populated
I0211 20:52:31.063909  123370 controller_utils.go:1028] Caches are synced for persistent volume controller
I0211 20:52:31.063931  123370 pv_controller_base.go:157] controller initialized
I0211 20:52:31.064017  123370 pv_controller_base.go:408] resyncing PV controller
I0211 20:52:31.164104  123370 shared_informer.go:123] caches populated
I0211 20:52:31.264318  123370 shared_informer.go:123] caches populated
I0211 20:52:31.364546  123370 shared_informer.go:123] caches populated
I0211 20:52:31.464862  123370 shared_informer.go:123] caches populated
I0211 20:52:31.468190  123370 wrap.go:47] POST /api/v1/nodes: (2.455775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:31.470532  123370 wrap.go:47] POST /api/v1/nodes: (1.785251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:31.476561  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (5.480923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:31.478651  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.648417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:31.478996  123370 volume_binding_test.go:193] Running test wait can bind
I0211 20:52:31.480613  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:31.480615  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:31.480785  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:31.481089  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.857504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:31.481306  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:31.481469  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:31.482431  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:31.482971  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.295432ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:31.488628  123370 wrap.go:47] POST /api/v1/persistentvolumes: (5.148957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:31.489096  123370 pv_controller_base.go:491] storeObjectUpdate: adding volume "pv-w-canbind", version 31139
I0211 20:52:31.489149  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind]: phase: Pending, bound to: "", boundByController: false
I0211 20:52:31.489162  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I0211 20:52:31.489171  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind]: set phase Available
I0211 20:52:31.491742  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (1.974501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:31.492302  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind" with version 31140
I0211 20:52:31.492337  123370 pv_controller.go:815] volume "pv-w-canbind" entered phase "Available"
I0211 20:52:31.492369  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind" with version 31140
I0211 20:52:31.492388  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "", boundByController: false
I0211 20:52:31.492398  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I0211 20:52:31.492432  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind]: set phase Available
I0211 20:52:31.492445  123370 pv_controller.go:797] updating PersistentVolume[pv-w-canbind]: phase Available already set
I0211 20:52:31.492587  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (3.464616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:31.493057  123370 pv_controller_base.go:491] storeObjectUpdate: adding claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind", version 31141
I0211 20:52:31.493109  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:52:31.493172  123370 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind]: no volume found
I0211 20:52:31.493207  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind] status: set phase Pending
I0211 20:52:31.493286  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind] status: phase Pending already set
I0211 20:52:31.493347  123370 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002", Name:"pvc-w-canbind", UID:"f33844bc-2e3e-11e9-8784-0242ac110002", APIVersion:"v1", ResourceVersion:"31141", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
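(Editor's note: the 'WaitForFirstConsumer' event above means the claim's StorageClass defers binding until a pod that uses the claim is scheduled, which is why the PVC stays Pending here. A hedged sketch of such a StorageClass object follows; the class name and provisioner are illustrative assumptions, not the values used by this test.)

    // Hedged sketch: a StorageClass whose claims wait for a consuming pod before binding.
    package main

    import (
        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func waitForFirstConsumerClass() *storagev1.StorageClass {
        mode := storagev1.VolumeBindingWaitForFirstConsumer
        return &storagev1.StorageClass{
            ObjectMeta:        metav1.ObjectMeta{Name: "wait-example"},  // hypothetical name
            Provisioner:       "kubernetes.io/no-provisioner",           // pre-provisioned volumes
            VolumeBindingMode: &mode,
        }
    }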
I0211 20:52:31.495231  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.591948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:31.495553  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (2.465962ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:31.495896  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-canbind
I0211 20:52:31.495915  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-canbind
I0211 20:52:31.496037  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:31.496070  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:31.496152  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:31.496171  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:31.496189  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:31.496193  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:31.496255  123370 scheduler_binder.go:710] Found matching volumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-canbind" on node "node-1"
I0211 20:52:31.496340  123370 scheduler_binder.go:697] No matching volumes for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-canbind", PVC "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind" on node "node-2"
I0211 20:52:31.496367  123370 scheduler_binder.go:736] storage class "wait-9fdh" of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind" does not support dynamic provisioning
I0211 20:52:31.496467  123370 scheduler_binder.go:269] AssumePodVolumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-canbind", node "node-1"
I0211 20:52:31.496533  123370 scheduler_assume_cache.go:319] Assumed v1.PersistentVolume "pv-w-canbind", version 31140
I0211 20:52:31.496640  123370 scheduler_binder.go:344] BindPodVolumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-canbind", node "node-1"
I0211 20:52:31.496675  123370 scheduler_binder.go:412] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind" bound to volume "pv-w-canbind"
I0211 20:52:31.499194  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-canbind: (2.138344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:31.499307  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind" with version 31144
I0211 20:52:31.499353  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind (uid: f33844bc-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:31.499367  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind
I0211 20:52:31.499379  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:52:31.499399  123370 pv_controller.go:620] synchronizing PersistentVolume[pv-w-canbind]: volume not bound yet, waiting for syncClaim to fix it
I0211 20:52:31.499453  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind" with version 31141
I0211 20:52:31.499479  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:52:31.499568  123370 pv_controller.go:352] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind]: volume "pv-w-canbind" found: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind (uid: f33844bc-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:31.499599  123370 pv_controller.go:947] binding volume "pv-w-canbind" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind"
I0211 20:52:31.499619  123370 pv_controller.go:846] updating PersistentVolume[pv-w-canbind]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind"
I0211 20:52:31.499636  123370 pv_controller.go:858] updating PersistentVolume[pv-w-canbind]: already bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind"
I0211 20:52:31.499663  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind]: set phase Bound
I0211 20:52:31.499634  123370 scheduler_binder.go:417] updating PersistentVolume[pv-w-canbind]: bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind"
I0211 20:52:31.501912  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (1.978765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:31.502246  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind" with version 31145
I0211 20:52:31.502279  123370 pv_controller.go:815] volume "pv-w-canbind" entered phase "Bound"
I0211 20:52:31.502291  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind]: binding to "pv-w-canbind"
I0211 20:52:31.502311  123370 pv_controller.go:917] volume "pv-w-canbind" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind"
I0211 20:52:31.502739  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind" with version 31145
I0211 20:52:31.502780  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind (uid: f33844bc-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:31.502788  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind
I0211 20:52:31.502795  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:52:31.502808  123370 pv_controller.go:620] synchronizing PersistentVolume[pv-w-canbind]: volume not bound yet, waiting for syncClaim to fix it
I0211 20:52:31.504628  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-canbind: (2.054621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:31.504900  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind" with version 31146
I0211 20:52:31.504937  123370 pv_controller.go:928] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind]: bound to "pv-w-canbind"
I0211 20:52:31.504957  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind] status: set phase Bound
I0211 20:52:31.507532  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-canbind/status: (2.260431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:31.507812  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind" with version 31147
I0211 20:52:31.507846  123370 pv_controller.go:759] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind" entered phase "Bound"
I0211 20:52:31.507860  123370 pv_controller.go:973] volume "pv-w-canbind" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind"
I0211 20:52:31.507884  123370 pv_controller.go:974] volume "pv-w-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind (uid: f33844bc-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:31.507903  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind" status after binding: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I0211 20:52:31.507941  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind" with version 31147
I0211 20:52:31.507974  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind]: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I0211 20:52:31.508001  123370 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind]: volume "pv-w-canbind" found: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind (uid: f33844bc-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:31.508013  123370 pv_controller.go:483] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind]: claim is already correctly bound
I0211 20:52:31.508023  123370 pv_controller.go:947] binding volume "pv-w-canbind" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind"
I0211 20:52:31.508040  123370 pv_controller.go:846] updating PersistentVolume[pv-w-canbind]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind"
I0211 20:52:31.508066  123370 pv_controller.go:858] updating PersistentVolume[pv-w-canbind]: already bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind"
I0211 20:52:31.508081  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind]: set phase Bound
I0211 20:52:31.508119  123370 pv_controller.go:797] updating PersistentVolume[pv-w-canbind]: phase Bound already set
I0211 20:52:31.508127  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind]: binding to "pv-w-canbind"
I0211 20:52:31.508140  123370 pv_controller.go:932] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind]: already bound to "pv-w-canbind"
I0211 20:52:31.508149  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind] status: set phase Bound
I0211 20:52:31.508188  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind] status: phase Bound already set
I0211 20:52:31.508211  123370 pv_controller.go:973] volume "pv-w-canbind" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind"
I0211 20:52:31.508242  123370 pv_controller.go:974] volume "pv-w-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind (uid: f33844bc-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:31.508252  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind" status after binding: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I0211 20:52:31.598296  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind: (1.948699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:31.698187  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind: (1.932556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:31.798084  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind: (1.81307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:31.852506  123370 cache.go:530] Couldn't expire cache for pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-canbind. Binding is still in progress.
I0211 20:52:31.898666  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind: (2.372884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:31.998633  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind: (2.073392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:32.098912  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind: (2.367954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:32.198154  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind: (1.85166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:32.298213  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind: (1.89193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:32.398090  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind: (1.91217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:32.480868  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:32.480868  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:32.481045  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:32.481487  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:32.481593  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:32.482642  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:32.498240  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind: (1.966645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:32.499921  123370 scheduler_binder.go:559] All PVCs for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-canbind" are bound
I0211 20:52:32.499995  123370 factory.go:733] Attempting to bind pod-w-canbind to node-1
I0211 20:52:32.502882  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind/binding: (2.063452ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:32.503570  123370 scheduler.go:571] pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-canbind is bound successfully on node node-1, 2 nodes evaluated, 1 nodes were found feasible
I0211 20:52:32.506057  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.974572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:32.598110  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind: (1.8092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:32.600233  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-canbind: (1.558003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:32.602100  123370 wrap.go:47] GET /api/v1/persistentvolumes/pv-w-canbind: (1.345326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:32.608172  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (5.534637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:32.612698  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (3.816411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:32.613023  123370 pv_controller_base.go:251] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind" deleted
I0211 20:52:32.613109  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind" with version 31145
I0211 20:52:32.613158  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind (uid: f33844bc-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:32.613167  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind
I0211 20:52:32.614334  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-canbind: (965.258µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:32.614653  123370 pv_controller.go:564] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind not found
I0211 20:52:32.614683  123370 pv_controller.go:592] volume "pv-w-canbind" is released and reclaim policy "Retain" will be executed
I0211 20:52:32.614699  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind]: set phase Released
I0211 20:52:32.616677  123370 store.go:355] GuaranteedUpdate of /cefbd804-3d51-4834-a729-5f6c5123655d/persistentvolumes/pv-w-canbind failed because of a conflict, going to retry
I0211 20:52:32.616838  123370 wrap.go:47] DELETE /api/v1/persistentvolumes: (3.738329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:32.616883  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (1.84751ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:32.617083  123370 pv_controller.go:807] updating PersistentVolume[pv-w-canbind]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pv-w-canbind": StorageError: invalid object, Code: 4, Key: /cefbd804-3d51-4834-a729-5f6c5123655d/persistentvolumes/pv-w-canbind, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f337a6c4-2e3e-11e9-8784-0242ac110002, UID in object meta: 
I0211 20:52:32.617113  123370 pv_controller_base.go:201] could not sync volume "pv-w-canbind": Operation cannot be fulfilled on persistentvolumes "pv-w-canbind": StorageError: invalid object, Code: 4, Key: /cefbd804-3d51-4834-a729-5f6c5123655d/persistentvolumes/pv-w-canbind, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f337a6c4-2e3e-11e9-8784-0242ac110002, UID in object meta: 
I0211 20:52:32.617148  123370 pv_controller_base.go:211] volume "pv-w-canbind" deleted
I0211 20:52:32.617193  123370 pv_controller_base.go:385] deletion of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind" was already processed
I0211 20:52:32.627706  123370 wrap.go:47] DELETE /apis/storage.k8s.io/v1/storageclasses: (10.494835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:32.628015  123370 volume_binding_test.go:193] Running test wait pv prebound
I0211 20:52:32.629659  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.436013ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:32.631747  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.587378ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:32.633798  123370 wrap.go:47] POST /api/v1/persistentvolumes: (1.511495ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:32.634355  123370 pv_controller_base.go:491] storeObjectUpdate: adding volume "pv-w-prebound", version 31243
I0211 20:52:32.634398  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-prebound]: phase: Pending, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound (uid: )", boundByController: false
I0211 20:52:32.634433  123370 pv_controller.go:523] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound
I0211 20:52:32.634469  123370 pv_controller.go:794] updating PersistentVolume[pv-w-prebound]: set phase Available
I0211 20:52:32.636531  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (1.890225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:32.637158  123370 pv_controller_base.go:491] storeObjectUpdate: adding claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound", version 31244
I0211 20:52:32.637208  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:52:32.637266  123370 pv_controller.go:352] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Pending, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound (uid: )", boundByController: false
I0211 20:52:32.637290  123370 pv_controller.go:947] binding volume "pv-w-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound"
I0211 20:52:32.637303  123370 pv_controller.go:846] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound"
I0211 20:52:32.637213  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (2.458246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:32.637336  123370 pv_controller.go:865] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I0211 20:52:32.637591  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-prebound" with version 31245
I0211 20:52:32.637636  123370 pv_controller.go:815] volume "pv-w-prebound" entered phase "Available"
I0211 20:52:32.637664  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-prebound" with version 31245
I0211 20:52:32.637695  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound (uid: )", boundByController: false
I0211 20:52:32.637703  123370 pv_controller.go:523] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound
I0211 20:52:32.637722  123370 pv_controller.go:794] updating PersistentVolume[pv-w-prebound]: set phase Available
I0211 20:52:32.637742  123370 pv_controller.go:797] updating PersistentVolume[pv-w-prebound]: phase Available already set
I0211 20:52:32.639255  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-prebound: (1.399497ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:32.639315  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (2.147128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:32.639478  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pv-prebound
I0211 20:52:32.639503  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pv-prebound
I0211 20:52:32.639504  123370 pv_controller.go:868] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I0211 20:52:32.639526  123370 pv_controller.go:950] error binding volume "pv-w-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I0211 20:52:32.639566  123370 pv_controller_base.go:241] could not sync claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I0211 20:52:32.639634  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:32.639658  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:32.639811  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:32.639845  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:32.639907  123370 scheduler_binder.go:710] Found matching volumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pv-prebound" on node "node-1"
I0211 20:52:32.639955  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:32.639980  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:32.640036  123370 scheduler_binder.go:697] No matching volumes for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pv-prebound", PVC "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound" on node "node-2"
I0211 20:52:32.640067  123370 scheduler_binder.go:736] storage class "wait-dr6r" of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound" does not support dynamic provisioning
I0211 20:52:32.640142  123370 scheduler_binder.go:269] AssumePodVolumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pv-prebound", node "node-1"
I0211 20:52:32.640194  123370 scheduler_assume_cache.go:319] Assumed v1.PersistentVolume "pv-w-prebound", version 31245
I0211 20:52:32.640273  123370 scheduler_binder.go:344] BindPodVolumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pv-prebound", node "node-1"
I0211 20:52:32.640300  123370 scheduler_binder.go:412] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I0211 20:52:32.642857  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-prebound: (2.200268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:32.643007  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-prebound" with version 31247
I0211 20:52:32.643050  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound (uid: f3e6cc30-2e3e-11e9-8784-0242ac110002)", boundByController: false
I0211 20:52:32.643063  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound
I0211 20:52:32.643104  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:52:32.643124  123370 pv_controller.go:623] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0211 20:52:32.643272  123370 scheduler_binder.go:417] updating PersistentVolume[pv-w-prebound]: bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound"
I0211 20:52:32.643279  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound" with version 31244
I0211 20:52:32.643346  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:52:32.643379  123370 pv_controller.go:352] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound (uid: f3e6cc30-2e3e-11e9-8784-0242ac110002)", boundByController: false
I0211 20:52:32.643396  123370 pv_controller.go:947] binding volume "pv-w-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound"
I0211 20:52:32.643437  123370 pv_controller.go:846] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound"
I0211 20:52:32.643462  123370 pv_controller.go:858] updating PersistentVolume[pv-w-prebound]: already bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound"
I0211 20:52:32.643477  123370 pv_controller.go:794] updating PersistentVolume[pv-w-prebound]: set phase Bound
I0211 20:52:32.645749  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (1.954594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:32.645990  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-prebound" with version 31248
I0211 20:52:32.646023  123370 pv_controller.go:815] volume "pv-w-prebound" entered phase "Bound"
I0211 20:52:32.646036  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I0211 20:52:32.646050  123370 pv_controller.go:917] volume "pv-w-prebound" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound"
I0211 20:52:32.646120  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-prebound" with version 31248
I0211 20:52:32.646172  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound (uid: f3e6cc30-2e3e-11e9-8784-0242ac110002)", boundByController: false
I0211 20:52:32.646246  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound
I0211 20:52:32.646278  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:52:32.646291  123370 pv_controller.go:623] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0211 20:52:32.648264  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-pv-prebound: (1.934363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:32.648548  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound" with version 31249
I0211 20:52:32.648596  123370 pv_controller.go:928] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound]: bound to "pv-w-prebound"
I0211 20:52:32.648608  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound] status: set phase Bound
I0211 20:52:32.650858  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-pv-prebound/status: (1.931309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:32.651185  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound" with version 31250
I0211 20:52:32.651300  123370 pv_controller.go:759] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound" entered phase "Bound"
I0211 20:52:32.651328  123370 pv_controller.go:973] volume "pv-w-prebound" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound"
I0211 20:52:32.651354  123370 pv_controller.go:974] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound (uid: f3e6cc30-2e3e-11e9-8784-0242ac110002)", boundByController: false
I0211 20:52:32.651432  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0211 20:52:32.651480  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound" with version 31250
I0211 20:52:32.651515  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound]: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0211 20:52:32.651541  123370 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound (uid: f3e6cc30-2e3e-11e9-8784-0242ac110002)", boundByController: false
I0211 20:52:32.651558  123370 pv_controller.go:483] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound]: claim is already correctly bound
I0211 20:52:32.651576  123370 pv_controller.go:947] binding volume "pv-w-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound"
I0211 20:52:32.651595  123370 pv_controller.go:846] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound"
I0211 20:52:32.651622  123370 pv_controller.go:858] updating PersistentVolume[pv-w-prebound]: already bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound"
I0211 20:52:32.651646  123370 pv_controller.go:794] updating PersistentVolume[pv-w-prebound]: set phase Bound
I0211 20:52:32.651656  123370 pv_controller.go:797] updating PersistentVolume[pv-w-prebound]: phase Bound already set
I0211 20:52:32.651673  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I0211 20:52:32.651708  123370 pv_controller.go:932] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound]: already bound to "pv-w-prebound"
I0211 20:52:32.651730  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound] status: set phase Bound
I0211 20:52:32.651786  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound] status: phase Bound already set
I0211 20:52:32.651833  123370 pv_controller.go:973] volume "pv-w-prebound" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound"
I0211 20:52:32.651871  123370 pv_controller.go:974] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound (uid: f3e6cc30-2e3e-11e9-8784-0242ac110002)", boundByController: false
I0211 20:52:32.651883  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0211 20:52:32.741790  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pv-prebound: (1.697393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:32.841968  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pv-prebound: (2.03092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:32.852747  123370 cache.go:530] Couldn't expire cache for pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pv-prebound. Binding is still in progress.
I0211 20:52:32.942134  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pv-prebound: (2.141674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.042657  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pv-prebound: (2.523866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.143494  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pv-prebound: (3.397273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.242087  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pv-prebound: (1.980839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.341888  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pv-prebound: (1.76376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.441838  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pv-prebound: (1.807599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.481131  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:33.481132  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:33.481238  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:33.481658  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:33.481727  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:33.482837  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:33.542003  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pv-prebound: (1.931662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.642092  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pv-prebound: (1.940259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.643562  123370 scheduler_binder.go:559] All PVCs for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pv-prebound" are bound
I0211 20:52:33.643640  123370 factory.go:733] Attempting to bind pod-w-pv-prebound to node-1
I0211 20:52:33.646114  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pv-prebound/binding: (2.122038ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.646344  123370 scheduler.go:571] pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pv-prebound is bound successfully on node node-1, 2 nodes evaluated, 1 nodes were found feasible
I0211 20:52:33.648460  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.736184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.742120  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pv-prebound: (1.996443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.744201  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-pv-prebound: (1.525451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.746463  123370 wrap.go:47] GET /api/v1/persistentvolumes/pv-w-prebound: (1.699875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.752237  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (5.307204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.756260  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (3.515509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.756673  123370 pv_controller_base.go:251] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound" deleted
I0211 20:52:33.756728  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-prebound" with version 31248
I0211 20:52:33.756773  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound (uid: f3e6cc30-2e3e-11e9-8784-0242ac110002)", boundByController: false
I0211 20:52:33.756793  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound
I0211 20:52:33.756813  123370 pv_controller.go:564] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound not found
I0211 20:52:33.756829  123370 pv_controller.go:592] volume "pv-w-prebound" is released and reclaim policy "Retain" will be executed
I0211 20:52:33.756845  123370 pv_controller.go:794] updating PersistentVolume[pv-w-prebound]: set phase Released
I0211 20:52:33.758691  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (1.57058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:33.759018  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-prebound" with version 31298
I0211 20:52:33.759056  123370 pv_controller.go:815] volume "pv-w-prebound" entered phase "Released"
I0211 20:52:33.759070  123370 pv_controller.go:1027] reclaimVolume[pv-w-prebound]: policy is Retain, nothing to do
I0211 20:52:33.759131  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-prebound" with version 31298
I0211 20:52:33.759177  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-prebound]: phase: Released, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound (uid: f3e6cc30-2e3e-11e9-8784-0242ac110002)", boundByController: false
I0211 20:52:33.759188  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound
I0211 20:52:33.759216  123370 pv_controller.go:564] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound not found
I0211 20:52:33.759225  123370 pv_controller.go:1027] reclaimVolume[pv-w-prebound]: policy is Retain, nothing to do
I0211 20:52:33.760333  123370 wrap.go:47] DELETE /api/v1/persistentvolumes: (3.663334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.760604  123370 pv_controller_base.go:211] volume "pv-w-prebound" deleted
I0211 20:52:33.760641  123370 pv_controller_base.go:385] deletion of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-pv-prebound" was already processed
I0211 20:52:33.766974  123370 wrap.go:47] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.232137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.767175  123370 volume_binding_test.go:193] Running test wait can bind two
I0211 20:52:33.768703  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.252149ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.770606  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.382563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.772589  123370 wrap.go:47] POST /api/v1/persistentvolumes: (1.531596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.772728  123370 pv_controller_base.go:491] storeObjectUpdate: adding volume "pv-w-canbind-2", version 31304
I0211 20:52:33.772762  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Pending, bound to: "", boundByController: false
I0211 20:52:33.772774  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-w-canbind-2]: volume is unused
I0211 20:52:33.772782  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind-2]: set phase Available
I0211 20:52:33.774585  123370 wrap.go:47] POST /api/v1/persistentvolumes: (1.477536ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:33.774741  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (1.690847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.775000  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-2" with version 31306
I0211 20:52:33.775046  123370 pv_controller.go:815] volume "pv-w-canbind-2" entered phase "Available"
I0211 20:52:33.775075  123370 pv_controller_base.go:491] storeObjectUpdate: adding volume "pv-w-canbind-3", version 31305
I0211 20:52:33.775099  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Pending, bound to: "", boundByController: false
I0211 20:52:33.775110  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-w-canbind-3]: volume is unused
I0211 20:52:33.775127  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind-3]: set phase Available
I0211 20:52:33.777259  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (1.801047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:33.777340  123370 wrap.go:47] POST /api/v1/persistentvolumes: (2.153342ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.777583  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-3" with version 31307
I0211 20:52:33.777628  123370 pv_controller.go:815] volume "pv-w-canbind-3" entered phase "Available"
I0211 20:52:33.777758  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-2" with version 31306
I0211 20:52:33.777795  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Available, bound to: "", boundByController: false
I0211 20:52:33.777806  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-w-canbind-2]: volume is unused
I0211 20:52:33.777813  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind-2]: set phase Available
I0211 20:52:33.777821  123370 pv_controller.go:797] updating PersistentVolume[pv-w-canbind-2]: phase Available already set
I0211 20:52:33.777889  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-3" with version 31307
I0211 20:52:33.777912  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Available, bound to: "", boundByController: false
I0211 20:52:33.777922  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-w-canbind-3]: volume is unused
I0211 20:52:33.777953  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind-3]: set phase Available
I0211 20:52:33.777963  123370 pv_controller.go:797] updating PersistentVolume[pv-w-canbind-3]: phase Available already set
I0211 20:52:33.778018  123370 pv_controller_base.go:491] storeObjectUpdate: adding volume "pv-w-canbind-5", version 31308
I0211 20:52:33.778050  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Pending, bound to: "", boundByController: false
I0211 20:52:33.778062  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-w-canbind-5]: volume is unused
I0211 20:52:33.778077  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind-5]: set phase Available
I0211 20:52:33.779289  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (1.482494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.779491  123370 pv_controller_base.go:491] storeObjectUpdate: adding claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2", version 31309
I0211 20:52:33.779530  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:52:33.779558  123370 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2]: no volume found
I0211 20:52:33.779583  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2] status: set phase Pending
I0211 20:52:33.779659  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2] status: phase Pending already set
I0211 20:52:33.779724  123370 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002", Name:"pvc-w-canbind-2", UID:"f4953989-2e3e-11e9-8784-0242ac110002", APIVersion:"v1", ResourceVersion:"31309", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0211 20:52:33.780225  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-canbind-5/status: (1.772183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:33.780521  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-5" with version 31310
I0211 20:52:33.780604  123370 pv_controller.go:815] volume "pv-w-canbind-5" entered phase "Available"
I0211 20:52:33.780675  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-5" with version 31310
I0211 20:52:33.780696  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Available, bound to: "", boundByController: false
I0211 20:52:33.780706  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-w-canbind-5]: volume is unused
I0211 20:52:33.780713  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind-5]: set phase Available
I0211 20:52:33.780720  123370 pv_controller.go:797] updating PersistentVolume[pv-w-canbind-5]: phase Available already set
I0211 20:52:33.781680  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (1.89034ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.781858  123370 pv_controller_base.go:491] storeObjectUpdate: adding claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3", version 31311
I0211 20:52:33.781893  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:52:33.781909  123370 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3]: no volume found
I0211 20:52:33.781937  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3] status: set phase Pending
I0211 20:52:33.781971  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3] status: phase Pending already set
I0211 20:52:33.782004  123370 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002", Name:"pvc-w-canbind-3", UID:"f4958d22-2e3e-11e9-8784-0242ac110002", APIVersion:"v1", ResourceVersion:"31311", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0211 20:52:33.782066  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.490343ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42034]
I0211 20:52:33.784031  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (1.901682ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.784122  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.444046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42034]
I0211 20:52:33.784460  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-canbind-2
I0211 20:52:33.784483  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-canbind-2
I0211 20:52:33.784615  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:33.784631  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:33.784635  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:33.784666  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:33.784751  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:33.784777  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:33.784811  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:33.784811  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:33.784838  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:33.784845  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:33.784853  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:33.784863  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:33.784964  123370 scheduler_binder.go:697] No matching volumes for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-canbind-2", PVC "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3" on node "node-1"
I0211 20:52:33.785001  123370 scheduler_binder.go:736] storage class "wait-hsjj" of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3" does not support dynamic provisioning
I0211 20:52:33.785053  123370 scheduler_binder.go:710] Found matching volumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-canbind-2" on node "node-2"
I0211 20:52:33.785116  123370 scheduler_binder.go:269] AssumePodVolumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-canbind-2", node "node-2"
I0211 20:52:33.785160  123370 scheduler_assume_cache.go:319] Assumed v1.PersistentVolume "pv-w-canbind-3", version 31307
I0211 20:52:33.785196  123370 scheduler_assume_cache.go:319] Assumed v1.PersistentVolume "pv-w-canbind-2", version 31306
I0211 20:52:33.785264  123370 scheduler_binder.go:344] BindPodVolumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-canbind-2", node "node-2"
I0211 20:52:33.785292  123370 scheduler_binder.go:412] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2" bound to volume "pv-w-canbind-3"
I0211 20:52:33.787275  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-canbind-3: (1.699716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.787600  123370 scheduler_binder.go:417] updating PersistentVolume[pv-w-canbind-3]: bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2"
I0211 20:52:33.787649  123370 scheduler_binder.go:412] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3" bound to volume "pv-w-canbind-2"
I0211 20:52:33.787703  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-3" with version 31316
I0211 20:52:33.787753  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2 (uid: f4953989-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:33.787773  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2
I0211 20:52:33.787781  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:52:33.787788  123370 pv_controller.go:620] synchronizing PersistentVolume[pv-w-canbind-3]: volume not bound yet, waiting for syncClaim to fix it
I0211 20:52:33.787807  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2" with version 31309
I0211 20:52:33.787839  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:52:33.787863  123370 pv_controller.go:352] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2]: volume "pv-w-canbind-3" found: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2 (uid: f4953989-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:33.787879  123370 pv_controller.go:947] binding volume "pv-w-canbind-3" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2"
I0211 20:52:33.787905  123370 pv_controller.go:846] updating PersistentVolume[pv-w-canbind-3]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2"
I0211 20:52:33.787951  123370 pv_controller.go:858] updating PersistentVolume[pv-w-canbind-3]: already bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2"
I0211 20:52:33.787965  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind-3]: set phase Bound
I0211 20:52:33.789887  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-canbind-2: (1.836593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.789962  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (1.812588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:33.790181  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-3" with version 31318
I0211 20:52:33.790217  123370 pv_controller.go:815] volume "pv-w-canbind-3" entered phase "Bound"
I0211 20:52:33.790229  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2]: binding to "pv-w-canbind-3"
I0211 20:52:33.790242  123370 pv_controller.go:917] volume "pv-w-canbind-3" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2"
I0211 20:52:33.790243  123370 scheduler_binder.go:417] updating PersistentVolume[pv-w-canbind-2]: bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3"
I0211 20:52:33.790486  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-2" with version 31317
I0211 20:52:33.790538  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3 (uid: f4958d22-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:33.790556  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3
I0211 20:52:33.790582  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:52:33.790595  123370 pv_controller.go:620] synchronizing PersistentVolume[pv-w-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I0211 20:52:33.790626  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-3" with version 31318
I0211 20:52:33.790663  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2 (uid: f4953989-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:33.790684  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2
I0211 20:52:33.790705  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:52:33.790721  123370 pv_controller.go:620] synchronizing PersistentVolume[pv-w-canbind-3]: volume not bound yet, waiting for syncClaim to fix it
I0211 20:52:33.792456  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-canbind-2: (1.831419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.792655  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2" with version 31319
I0211 20:52:33.792688  123370 pv_controller.go:928] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2]: bound to "pv-w-canbind-3"
I0211 20:52:33.792698  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2] status: set phase Bound
I0211 20:52:33.794573  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-canbind-2/status: (1.626056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.794822  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2" with version 31320
I0211 20:52:33.794862  123370 pv_controller.go:759] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2" entered phase "Bound"
I0211 20:52:33.794897  123370 pv_controller.go:973] volume "pv-w-canbind-3" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2"
I0211 20:52:33.794924  123370 pv_controller.go:974] volume "pv-w-canbind-3" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2 (uid: f4953989-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:33.794936  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2" status after binding: phase: Bound, bound to: "pv-w-canbind-3", bindCompleted: true, boundByController: true
I0211 20:52:33.794985  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3" with version 31311
I0211 20:52:33.795011  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:52:33.795041  123370 pv_controller.go:352] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3]: volume "pv-w-canbind-2" found: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3 (uid: f4958d22-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:33.795058  123370 pv_controller.go:947] binding volume "pv-w-canbind-2" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3"
I0211 20:52:33.795069  123370 pv_controller.go:846] updating PersistentVolume[pv-w-canbind-2]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3"
I0211 20:52:33.795089  123370 pv_controller.go:858] updating PersistentVolume[pv-w-canbind-2]: already bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3"
I0211 20:52:33.795099  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind-2]: set phase Bound
I0211 20:52:33.797116  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (1.628949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.797320  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-2" with version 31321
I0211 20:52:33.797352  123370 pv_controller.go:815] volume "pv-w-canbind-2" entered phase "Bound"
I0211 20:52:33.797355  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-2" with version 31321
I0211 20:52:33.797395  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3 (uid: f4958d22-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:33.797450  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3
I0211 20:52:33.797464  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:52:33.797483  123370 pv_controller.go:620] synchronizing PersistentVolume[pv-w-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I0211 20:52:33.797364  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3]: binding to "pv-w-canbind-2"
I0211 20:52:33.797523  123370 pv_controller.go:917] volume "pv-w-canbind-2" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3"
I0211 20:52:33.799623  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-canbind-3: (1.773362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.800021  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3" with version 31322
I0211 20:52:33.800063  123370 pv_controller.go:928] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3]: bound to "pv-w-canbind-2"
I0211 20:52:33.800077  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3] status: set phase Bound
I0211 20:52:33.802018  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-canbind-3/status: (1.690055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.802265  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3" with version 31323
I0211 20:52:33.802299  123370 pv_controller.go:759] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3" entered phase "Bound"
I0211 20:52:33.802312  123370 pv_controller.go:973] volume "pv-w-canbind-2" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3"
I0211 20:52:33.802329  123370 pv_controller.go:974] volume "pv-w-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3 (uid: f4958d22-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:33.802359  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3" status after binding: phase: Bound, bound to: "pv-w-canbind-2", bindCompleted: true, boundByController: true
I0211 20:52:33.802398  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2" with version 31320
I0211 20:52:33.802442  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2]: phase: Bound, bound to: "pv-w-canbind-3", bindCompleted: true, boundByController: true
I0211 20:52:33.802482  123370 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2]: volume "pv-w-canbind-3" found: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2 (uid: f4953989-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:33.802500  123370 pv_controller.go:483] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2]: claim is already correctly bound
I0211 20:52:33.802519  123370 pv_controller.go:947] binding volume "pv-w-canbind-3" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2"
I0211 20:52:33.802529  123370 pv_controller.go:846] updating PersistentVolume[pv-w-canbind-3]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2"
I0211 20:52:33.802555  123370 pv_controller.go:858] updating PersistentVolume[pv-w-canbind-3]: already bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2"
I0211 20:52:33.802570  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind-3]: set phase Bound
I0211 20:52:33.802578  123370 pv_controller.go:797] updating PersistentVolume[pv-w-canbind-3]: phase Bound already set
I0211 20:52:33.802610  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2]: binding to "pv-w-canbind-3"
I0211 20:52:33.802641  123370 pv_controller.go:932] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2]: already bound to "pv-w-canbind-3"
I0211 20:52:33.802662  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2] status: set phase Bound
I0211 20:52:33.802678  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2] status: phase Bound already set
I0211 20:52:33.802693  123370 pv_controller.go:973] volume "pv-w-canbind-3" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2"
I0211 20:52:33.802716  123370 pv_controller.go:974] volume "pv-w-canbind-3" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2 (uid: f4953989-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:33.802733  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2" status after binding: phase: Bound, bound to: "pv-w-canbind-3", bindCompleted: true, boundByController: true
I0211 20:52:33.802771  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3" with version 31323
I0211 20:52:33.802792  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3]: phase: Bound, bound to: "pv-w-canbind-2", bindCompleted: true, boundByController: true
I0211 20:52:33.802813  123370 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3]: volume "pv-w-canbind-2" found: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3 (uid: f4958d22-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:33.802899  123370 pv_controller.go:483] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3]: claim is already correctly bound
I0211 20:52:33.802906  123370 pv_controller.go:947] binding volume "pv-w-canbind-2" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3"
I0211 20:52:33.802912  123370 pv_controller.go:846] updating PersistentVolume[pv-w-canbind-2]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3"
I0211 20:52:33.802926  123370 pv_controller.go:858] updating PersistentVolume[pv-w-canbind-2]: already bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3"
I0211 20:52:33.802931  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind-2]: set phase Bound
I0211 20:52:33.802935  123370 pv_controller.go:797] updating PersistentVolume[pv-w-canbind-2]: phase Bound already set
I0211 20:52:33.802939  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3]: binding to "pv-w-canbind-2"
I0211 20:52:33.802996  123370 pv_controller.go:932] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3]: already bound to "pv-w-canbind-2"
I0211 20:52:33.803006  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3] status: set phase Bound
I0211 20:52:33.803053  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3] status: phase Bound already set
I0211 20:52:33.803073  123370 pv_controller.go:973] volume "pv-w-canbind-2" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3"
I0211 20:52:33.803091  123370 pv_controller.go:974] volume "pv-w-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3 (uid: f4958d22-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:33.803111  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3" status after binding: phase: Bound, bound to: "pv-w-canbind-2", bindCompleted: true, boundByController: true
I0211 20:52:33.852923  123370 cache.go:530] Couldn't expire cache for pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-canbind-2. Binding is still in progress.
I0211 20:52:33.886743  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind-2: (1.879629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:33.986650  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind-2: (1.774153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.086802  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind-2: (1.915461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.186601  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind-2: (1.789362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.287542  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind-2: (2.635397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.386874  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind-2: (2.028291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.481376  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:34.481404  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:34.481391  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:34.481846  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:34.481876  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:34.483047  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:34.487603  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind-2: (2.775668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.586695  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind-2: (1.904695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.686762  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind-2: (1.868881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.787222  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind-2: (2.347835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.790547  123370 scheduler_binder.go:559] All PVCs for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-canbind-2" are bound
I0211 20:52:34.790596  123370 factory.go:733] Attempting to bind pod-w-canbind-2 to node-2
I0211 20:52:34.792670  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind-2/binding: (1.846ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.793015  123370 scheduler.go:571] pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-canbind-2 is bound successfully on node node-2, 2 nodes evaluated, 1 nodes were found feasible
I0211 20:52:34.794881  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.555152ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.886721  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-canbind-2: (1.864725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.888762  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-canbind-2: (1.436936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.890520  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-canbind-3: (1.324838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.892404  123370 wrap.go:47] GET /api/v1/persistentvolumes/pv-w-canbind-2: (1.461827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.894234  123370 wrap.go:47] GET /api/v1/persistentvolumes/pv-w-canbind-3: (1.152138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.895894  123370 wrap.go:47] GET /api/v1/persistentvolumes/pv-w-canbind-5: (1.22789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.902150  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (5.77576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.907758  123370 pv_controller_base.go:251] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2" deleted
I0211 20:52:34.907826  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-3" with version 31318
I0211 20:52:34.907874  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2 (uid: f4953989-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:34.907893  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2
I0211 20:52:34.909141  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-canbind-2: (1.002155ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:34.909397  123370 pv_controller.go:564] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2 not found
I0211 20:52:34.909452  123370 pv_controller.go:592] volume "pv-w-canbind-3" is released and reclaim policy "Retain" will be executed
I0211 20:52:34.909478  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind-3]: set phase Released
I0211 20:52:34.909488  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (6.677195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.910130  123370 pv_controller_base.go:251] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3" deleted
I0211 20:52:34.911931  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (1.933457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:34.912225  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-3" with version 31424
I0211 20:52:34.912350  123370 pv_controller.go:815] volume "pv-w-canbind-3" entered phase "Released"
I0211 20:52:34.912369  123370 pv_controller.go:1027] reclaimVolume[pv-w-canbind-3]: policy is Retain, nothing to do
I0211 20:52:34.912400  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-2" with version 31321
I0211 20:52:34.912455  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3 (uid: f4958d22-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:34.912492  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3
I0211 20:52:34.913646  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-canbind-3: (930.547µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:34.913919  123370 pv_controller.go:564] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3 not found
I0211 20:52:34.913962  123370 pv_controller.go:592] volume "pv-w-canbind-2" is released and reclaim policy "Retain" will be executed
I0211 20:52:34.913974  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind-2]: set phase Released
I0211 20:52:34.915319  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (1.073181ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:34.915570  123370 pv_controller.go:807] updating PersistentVolume[pv-w-canbind-2]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pv-w-canbind-2": StorageError: invalid object, Code: 4, Key: /cefbd804-3d51-4834-a729-5f6c5123655d/persistentvolumes/pv-w-canbind-2, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f4942f6a-2e3e-11e9-8784-0242ac110002, UID in object meta: 
I0211 20:52:34.915614  123370 pv_controller_base.go:201] could not sync volume "pv-w-canbind-2": Operation cannot be fulfilled on persistentvolumes "pv-w-canbind-2": StorageError: invalid object, Code: 4, Key: /cefbd804-3d51-4834-a729-5f6c5123655d/persistentvolumes/pv-w-canbind-2, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f4942f6a-2e3e-11e9-8784-0242ac110002, UID in object meta: 
I0211 20:52:34.915653  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-3" with version 31424
I0211 20:52:34.915707  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Released, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2 (uid: f4953989-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:34.915724  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2
I0211 20:52:34.915736  123370 pv_controller.go:564] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2 not found
I0211 20:52:34.915760  123370 pv_controller.go:1027] reclaimVolume[pv-w-canbind-3]: policy is Retain, nothing to do
I0211 20:52:34.915779  123370 pv_controller_base.go:211] volume "pv-w-canbind-2" deleted
I0211 20:52:34.915826  123370 pv_controller_base.go:385] deletion of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-3" was already processed
I0211 20:52:34.916472  123370 pv_controller_base.go:211] volume "pv-w-canbind-3" deleted
I0211 20:52:34.916507  123370 pv_controller_base.go:385] deletion of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-2" was already processed
I0211 20:52:34.919364  123370 wrap.go:47] DELETE /api/v1/persistentvolumes: (9.432448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.919374  123370 pv_controller_base.go:211] volume "pv-w-canbind-5" deleted
I0211 20:52:34.926239  123370 wrap.go:47] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.311483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
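Note: the trace above (pod-w-canbind-2) shows the delayed-binding path end to end: the scheduler's volume binder assumes pv-w-canbind-2/3 for node-2, BindPodVolumes triggers the PV controller to bind both claims, and only once "All PVCs for pod ... are bound" is the pod itself bound to node-2. Below is a minimal sketch of the kind of node-pinned local PV such a fixture could use (the PersistentLocalVolumes gate is enabled for this run); the object name, path, and storage-class name are hypothetical, and the real fixtures live in test/integration/scheduler/volume_binding_test.go.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical example object, not the test's actual fixture.
	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "pv-node2-example"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("1Gi"),
			},
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: "wait-example",
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/tmp/disk"},
			},
			// Node affinity is what makes the volume usable only from node-2, so the
			// binder must pick a node that can satisfy the claim before the pod binds.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"node-2"},
						}},
					}},
				},
			},
		},
	}
	fmt.Println(pv.Name)
}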
I0211 20:52:34.926450  123370 volume_binding_test.go:193] Running test wait cannot bind two
I0211 20:52:34.928195  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.493252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.930460  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.779832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.932791  123370 wrap.go:47] POST /api/v1/persistentvolumes: (1.707431ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.933051  123370 pv_controller_base.go:491] storeObjectUpdate: adding volume "pv-w-cannotbind-1", version 31432
I0211 20:52:34.933092  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-cannotbind-1]: phase: Pending, bound to: "", boundByController: false
I0211 20:52:34.933104  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-w-cannotbind-1]: volume is unused
I0211 20:52:34.933109  123370 pv_controller.go:794] updating PersistentVolume[pv-w-cannotbind-1]: set phase Available
I0211 20:52:34.935264  123370 wrap.go:47] POST /api/v1/persistentvolumes: (1.92062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.935776  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-cannotbind-1/status: (2.418459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:34.936352  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-cannotbind-1" with version 31433
I0211 20:52:34.936397  123370 pv_controller.go:815] volume "pv-w-cannotbind-1" entered phase "Available"
I0211 20:52:34.936458  123370 pv_controller_base.go:491] storeObjectUpdate: adding volume "pv-w-cannotbind-2", version 31434
I0211 20:52:34.936502  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-cannotbind-2]: phase: Pending, bound to: "", boundByController: false
I0211 20:52:34.936521  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume is unused
I0211 20:52:34.936528  123370 pv_controller.go:794] updating PersistentVolume[pv-w-cannotbind-2]: set phase Available
I0211 20:52:34.937621  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (1.836917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.937907  123370 pv_controller_base.go:491] storeObjectUpdate: adding claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-1", version 31435
I0211 20:52:34.937975  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-1]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:52:34.938007  123370 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-1]: no volume found
I0211 20:52:34.938045  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-1] status: set phase Pending
I0211 20:52:34.938061  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-1] status: phase Pending already set
I0211 20:52:34.938090  123370 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002", Name:"pvc-w-cannotbind-1", UID:"f545f333-2e3e-11e9-8784-0242ac110002", APIVersion:"v1", ResourceVersion:"31435", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0211 20:52:34.938632  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-cannotbind-2/status: (1.893347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:34.938920  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 31436
I0211 20:52:34.938967  123370 pv_controller.go:815] volume "pv-w-cannotbind-2" entered phase "Available"
I0211 20:52:34.938993  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-cannotbind-1" with version 31433
I0211 20:52:34.939014  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-cannotbind-1]: phase: Available, bound to: "", boundByController: false
I0211 20:52:34.939035  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-w-cannotbind-1]: volume is unused
I0211 20:52:34.939043  123370 pv_controller.go:794] updating PersistentVolume[pv-w-cannotbind-1]: set phase Available
I0211 20:52:34.939060  123370 pv_controller.go:797] updating PersistentVolume[pv-w-cannotbind-1]: phase Available already set
I0211 20:52:34.940389  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 31436
I0211 20:52:34.940463  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-cannotbind-2]: phase: Available, bound to: "", boundByController: false
I0211 20:52:34.940486  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume is unused
I0211 20:52:34.940504  123370 pv_controller.go:794] updating PersistentVolume[pv-w-cannotbind-2]: set phase Available
I0211 20:52:34.940515  123370 pv_controller.go:797] updating PersistentVolume[pv-w-cannotbind-2]: phase Available already set
I0211 20:52:34.940674  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (2.025316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42186]
I0211 20:52:34.940771  123370 pv_controller_base.go:491] storeObjectUpdate: adding claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2", version 31438
I0211 20:52:34.940806  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:52:34.940829  123370 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2]: no volume found
I0211 20:52:34.940859  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2] status: set phase Pending
I0211 20:52:34.940905  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2] status: phase Pending already set
I0211 20:52:34.940674  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (2.574149ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0211 20:52:34.940936  123370 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002", Name:"pvc-w-cannotbind-2", UID:"f54646a6-2e3e-11e9-8784-0242ac110002", APIVersion:"v1", ResourceVersion:"31438", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0211 20:52:34.943005  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.527552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42186]
I0211 20:52:34.943766  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (2.189749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:34.943879  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind-2
I0211 20:52:34.943906  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind-2
I0211 20:52:34.944046  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-1 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.944058  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.944101  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-1 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.944126  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.944188  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-1 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.944218  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.944235  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-1 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.944236  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-1 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.944280  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.944243  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.944305  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-1 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.944314  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.944384  123370 scheduler_binder.go:697] No matching volumes for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind-2", PVC "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2" on node "node-2"
I0211 20:52:34.944442  123370 scheduler_binder.go:736] storage class "wait-z6n9" of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2" does not support dynamic provisioning
I0211 20:52:34.944392  123370 scheduler_binder.go:697] No matching volumes for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind-2", PVC "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2" on node "node-1"
I0211 20:52:34.944508  123370 scheduler_binder.go:736] storage class "wait-z6n9" of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2" does not support dynamic provisioning
I0211 20:52:34.944572  123370 factory.go:647] Unable to schedule volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind-2: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I0211 20:52:34.944634  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind-2 to (PodScheduled==False, Reason=Unschedulable)
I0211 20:52:34.946854  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-cannotbind-2: (1.890291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42186]
I0211 20:52:34.947373  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.438215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:34.947929  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-cannotbind-2/status: (2.912821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0211 20:52:34.949654  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-cannotbind-2: (1.209946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:34.950064  123370 generic_scheduler.go:306] Preemption will not help schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind-2 on any node.
I0211 20:52:34.950169  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind-2
I0211 20:52:34.950190  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind-2
I0211 20:52:34.950280  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-1 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.950300  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.950301  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-1 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.950318  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.950374  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-1 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.950395  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.950424  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-1 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.950434  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.950386  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-1 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.950473  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.950501  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-1 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.950512  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:52:34.950588  123370 scheduler_binder.go:697] No matching volumes for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind-2", PVC "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2" on node "node-2"
I0211 20:52:34.950618  123370 scheduler_binder.go:736] storage class "wait-z6n9" of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2" does not support dynamic provisioning
I0211 20:52:34.950664  123370 scheduler_binder.go:697] No matching volumes for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind-2", PVC "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2" on node "node-1"
I0211 20:52:34.950688  123370 scheduler_binder.go:736] storage class "wait-z6n9" of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2" does not support dynamic provisioning
I0211 20:52:34.950744  123370 factory.go:647] Unable to schedule volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind-2: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I0211 20:52:34.950808  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind-2 to (PodScheduled==False, Reason=Unschedulable)
I0211 20:52:34.953438  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-cannotbind-2: (2.340522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:34.953559  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-cannotbind-2/status: (2.357652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42186]
I0211 20:52:34.954000  123370 wrap.go:47] PATCH /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events/pod-w-cannotbind-2.15826a883154ed4b: (2.438427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:34.955371  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-cannotbind-2: (1.321644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42186]
I0211 20:52:34.955694  123370 generic_scheduler.go:306] Preemption will not help schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind-2 on any node.
I0211 20:52:35.046478  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-cannotbind-2: (1.774899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:35.048374  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-cannotbind-1: (1.309808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:35.050035  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-cannotbind-2: (1.143767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:35.051803  123370 wrap.go:47] GET /api/v1/persistentvolumes/pv-w-cannotbind-1: (1.23842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:35.053403  123370 wrap.go:47] GET /api/v1/persistentvolumes/pv-w-cannotbind-2: (1.103178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:35.058458  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind-2
I0211 20:52:35.058501  123370 scheduler.go:449] Skip schedule deleting pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind-2
I0211 20:52:35.059961  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (6.067406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:35.061050  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (2.235404ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:35.064646  123370 pv_controller_base.go:251] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-1" deleted
I0211 20:52:35.066815  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (6.3163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:35.066917  123370 pv_controller_base.go:251] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind-2" deleted
I0211 20:52:35.071182  123370 pv_controller_base.go:211] volume "pv-w-cannotbind-1" deleted
I0211 20:52:35.072919  123370 wrap.go:47] DELETE /api/v1/persistentvolumes: (5.701928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:35.073106  123370 pv_controller_base.go:211] volume "pv-w-cannotbind-2" deleted
I0211 20:52:35.079170  123370 wrap.go:47] DELETE /apis/storage.k8s.io/v1/storageclasses: (5.853132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
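Note: in the "wait cannot bind two" case above, pvc-w-cannotbind-2 finds no matching PV on either node, and its class "wait-z6n9" has no dynamic provisioner, so the pod stays Unschedulable ("0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind"). A sketch of a StorageClass with that behaviour (delayed binding, no provisioner) follows; the class name "wait-example" is hypothetical, standing in for the randomly generated names the test uses.

package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	waitForConsumer := storagev1.VolumeBindingWaitForFirstConsumer
	sc := &storagev1.StorageClass{
		ObjectMeta: metav1.ObjectMeta{Name: "wait-example"}, // hypothetical name
		// kubernetes.io/no-provisioner marks the class as static-only, which is why
		// the binder logs "does not support dynamic provisioning" when no
		// pre-created PV matches the claim.
		Provisioner:       "kubernetes.io/no-provisioner",
		VolumeBindingMode: &waitForConsumer, // bind only once a consuming pod is scheduled
	}
	fmt.Println(sc.Name)
}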
I0211 20:52:35.079327  123370 volume_binding_test.go:193] Running test immediate pvc prebound
I0211 20:52:35.080903  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.343969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:35.082672  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.353184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:35.084740  123370 wrap.go:47] POST /api/v1/persistentvolumes: (1.633269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:35.084781  123370 pv_controller_base.go:491] storeObjectUpdate: adding volume "pv-i-pvc-prebound", version 31459
I0211 20:52:35.084808  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Pending, bound to: "", boundByController: false
I0211 20:52:35.084820  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I0211 20:52:35.084839  123370 pv_controller.go:794] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I0211 20:52:35.087385  123370 pv_controller_base.go:491] storeObjectUpdate: adding claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound", version 31460
I0211 20:52:35.087491  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I0211 20:52:35.087505  123370 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested
I0211 20:52:35.087523  123370 pv_controller.go:383] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested and found: phase: Pending, bound to: "", boundByController: false
I0211 20:52:35.087543  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (2.312145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:35.087586  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (2.327858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:35.087542  123370 pv_controller.go:387] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: volume is unbound, binding
I0211 20:52:35.087676  123370 pv_controller.go:947] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound"
I0211 20:52:35.087730  123370 pv_controller.go:846] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound"
I0211 20:52:35.087844  123370 pv_controller.go:865] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound" bound to volume "pv-i-pvc-prebound"
I0211 20:52:35.087893  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 31461
I0211 20:52:35.088128  123370 pv_controller.go:815] volume "pv-i-pvc-prebound" entered phase "Available"
I0211 20:52:35.088164  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 31461
I0211 20:52:35.088187  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I0211 20:52:35.088201  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I0211 20:52:35.088210  123370 pv_controller.go:794] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I0211 20:52:35.088219  123370 pv_controller.go:797] updating PersistentVolume[pv-i-pvc-prebound]: phase Available already set
I0211 20:52:35.089657  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound: (1.462257ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:35.089853  123370 pv_controller.go:868] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0211 20:52:35.089885  123370 pv_controller.go:950] error binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0211 20:52:35.089899  123370 pv_controller_base.go:241] could not sync claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0211 20:52:35.090054  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (1.65322ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:35.090622  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound
I0211 20:52:35.090644  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound
E0211 20:52:35.090980  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:52:35.091058  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I0211 20:52:35.092847  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.451187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:35.092964  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.570423ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:35.093808  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound/status: (2.067592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42192]
E0211 20:52:35.094086  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:52:35.094161  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound
I0211 20:52:35.094184  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound
E0211 20:52:35.094372  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:52:35.094447  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I0211 20:52:35.096090  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.363274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:35.096856  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound/status: (2.147819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
E0211 20:52:35.097118  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:52:35.097727  123370 wrap.go:47] PATCH /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events/pod-i-pvc-prebound.15826a883a0f415e: (2.401502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42194]
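(Illustrative note: the long run of GET /api/v1/namespaces/.../pods/pod-i-pvc-prebound lines that follows, roughly one every 100ms, is the integration test polling the pod until it is scheduled. A sketch of that kind of wait loop is below, assuming client-go; the package and helper name are my own and this is not the test's actual helper.)

package waitutil

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodScheduled polls the pod every 100ms until its PodScheduled
// condition is True, mirroring the cadence of the GET requests in the log.
func waitForPodScheduled(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.Poll(100*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == v1.PodScheduled && c.Status == v1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil // not scheduled yet; poll again
	})
}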
I0211 20:52:35.192630  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.768882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:35.293123  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.141253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:35.393275  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.27096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:35.481622  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:35.481665  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:35.481622  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:35.482018  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:35.482070  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:35.483180  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
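(Illustrative note: the six "forcing resync" lines that recur once per second come from the shared informers the test wires up; the one-second cadence suggests a resync period of roughly 1s, though the exact value is an assumption inferred from the timestamps. Creating informers with such a period looks roughly like this sketch; the in-cluster config is a stand-in, not how the test builds its client.)

package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config as a stand-in; the test constructs its config differently.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// A ~1s resync period is an assumption inferred from the log cadence.
	factory := informers.NewSharedInformerFactory(cs, 1*time.Second)

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // informers with a non-zero resync period periodically trigger reflector resyncs like those logged here
	factory.WaitForCacheSync(stop)
}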
I0211 20:52:35.492764  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.935033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:35.592725  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.76582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:35.692700  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.825598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:35.792566  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.660902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:35.892530  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.742397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:35.993052  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.237902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:36.092388  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.597785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:36.192673  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.760055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:36.292665  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.727736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:36.392848  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.920378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:36.481865  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:36.481928  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:36.481972  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:36.482199  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:36.482205  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:36.483369  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:36.492907  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.103606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:36.579246  123370 wrap.go:47] GET /api/v1/namespaces/default: (1.622793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:36.581370  123370 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.61497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:36.582900  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.107973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:36.592263  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.456417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:36.692929  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.971066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:36.792785  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.963238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:36.892650  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.861393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:36.992658  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.843376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:37.092502  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.703881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:37.192758  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.777924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:37.292658  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.623371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:37.392796  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.924166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:37.482073  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:37.482104  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:37.482090  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:37.482436  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:37.482436  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:37.483469  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:37.492725  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.838875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:37.592598  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.740528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:37.692763  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.890181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:37.792775  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.896193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:37.892756  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.831963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:37.992963  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.063342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:38.092902  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.969143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:38.192981  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.043777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:38.205921  123370 wrap.go:47] GET /api/v1/namespaces/default: (1.180508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:52:38.207652  123370 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.16121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:52:38.209436  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.226929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:52:38.292689  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.766765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:38.393191  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.189222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:38.482308  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:38.482308  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:38.482318  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:38.482664  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:38.482666  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:38.483656  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:38.492987  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.162662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:38.593139  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.209209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:38.692742  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.879855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:38.792818  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.982533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:38.892708  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.80945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:38.992451  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.720195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:39.092925  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.973894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:39.192713  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.894928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:39.292783  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.894608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:39.395518  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.968673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:39.482535  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:39.482666  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:39.482680  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:39.482885  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:39.482896  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:39.483833  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:39.492814  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.99397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:39.592693  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.824733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:39.693177  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.328991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:39.794834  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.285698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:39.892827  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.924393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:39.992537  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.718034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:40.093163  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.242342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:40.192579  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.705444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:40.292789  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.866601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:40.392811  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.870819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:40.482796  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:40.482839  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:40.482853  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:40.483071  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:40.483074  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:40.484013  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:40.492710  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.923285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:40.592817  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.956393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:40.692854  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.897515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:40.792609  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.776134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:40.892923  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.976123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:40.992860  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.994166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:41.092961  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.053634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:41.192834  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.988732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:41.292669  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.737558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:41.392662  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.799336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:41.483012  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:41.483030  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:41.483032  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:41.483319  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:41.483326  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:41.484193  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:41.492973  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.171922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:41.592956  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.073607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:41.692802  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.951455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:41.792854  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.858843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:41.893222  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.295064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:41.992663  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.89063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:42.092916  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.988053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:42.192670  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.859463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:42.293387  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.492359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:42.392916  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.049403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:42.483215  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:42.483235  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:42.483215  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:42.483475  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:42.483486  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:42.484429  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:42.492907  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.967324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:42.592853  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.015557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:42.693350  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.437056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:42.793108  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.215682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:42.892740  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.862995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:42.995668  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (4.78989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:43.093321  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.430549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:43.193556  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.612904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:43.292763  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.933398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:43.392978  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.09667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:43.483465  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:43.483500  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:43.483519  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:43.483622  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:43.483622  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:43.484598  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:43.492979  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.113989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:43.592930  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.090461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:43.692874  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.9134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:43.792918  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.01779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:43.893055  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.074142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:43.992805  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.005781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:44.092791  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.878395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:44.192660  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.812566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:44.292673  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.80209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:44.392812  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.951365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:44.483670  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:44.483700  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:44.483689  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:44.483836  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:44.483853  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:44.484798  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:44.492708  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.883823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:44.592856  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.020712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:44.693015  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.002652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:44.792863  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.95178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:44.892801  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.022537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:44.993296  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.254352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:45.092825  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.800942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:45.193792  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.903999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:45.293104  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.111749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:45.393214  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.271847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:45.483894  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:45.483904  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:45.484002  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:45.483907  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:45.484049  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:45.484987  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:45.492769  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.941779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:45.592742  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.915348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:45.692932  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.082935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:45.793584  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.735982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:45.892638  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.831619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:45.992563  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.751439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:46.064211  123370 pv_controller_base.go:408] resyncing PV controller
I0211 20:52:46.064322  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 31461
I0211 20:52:46.064358  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I0211 20:52:46.064375  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I0211 20:52:46.064385  123370 pv_controller.go:794] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I0211 20:52:46.064394  123370 pv_controller.go:797] updating PersistentVolume[pv-i-pvc-prebound]: phase Available already set
I0211 20:52:46.064452  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound" with version 31460
I0211 20:52:46.064488  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I0211 20:52:46.064500  123370 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested
I0211 20:52:46.064514  123370 pv_controller.go:383] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested and found: phase: Available, bound to: "", boundByController: false
I0211 20:52:46.064532  123370 pv_controller.go:387] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: volume is unbound, binding
I0211 20:52:46.064554  123370 pv_controller.go:947] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound"
I0211 20:52:46.064567  123370 pv_controller.go:846] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound"
I0211 20:52:46.064651  123370 pv_controller.go:865] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound" bound to volume "pv-i-pvc-prebound"
I0211 20:52:46.067714  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound: (2.613805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:46.068230  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound
I0211 20:52:46.068255  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound
I0211 20:52:46.068366  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 32986
I0211 20:52:46.068395  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound (uid: f55cca89-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:46.068404  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound
I0211 20:52:46.068448  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound found: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I0211 20:52:46.068458  123370 pv_controller.go:636] synchronizing PersistentVolume[pv-i-pvc-prebound]: all is bound
I0211 20:52:46.068468  123370 pv_controller.go:794] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I0211 20:52:46.068523  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 32986
I0211 20:52:46.068552  123370 pv_controller.go:878] updating PersistentVolume[pv-i-pvc-prebound]: bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound"
I0211 20:52:46.068564  123370 pv_controller.go:794] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
E0211 20:52:46.068752  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:52:46.068802  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I0211 20:52:46.071054  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (2.313538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:46.071300  123370 store.go:355] GuaranteedUpdate of /cefbd804-3d51-4834-a729-5f6c5123655d/persistentvolumes/pv-i-pvc-prebound failed because of a conflict, going to retry
I0211 20:52:46.071486  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 32988
I0211 20:52:46.071512  123370 pv_controller.go:815] volume "pv-i-pvc-prebound" entered phase "Bound"
I0211 20:52:46.071538  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 32988
I0211 20:52:46.071560  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound (uid: f55cca89-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:52:46.071568  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound
I0211 20:52:46.071583  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound found: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I0211 20:52:46.071593  123370 pv_controller.go:636] synchronizing PersistentVolume[pv-i-pvc-prebound]: all is bound
I0211 20:52:46.071601  123370 pv_controller.go:794] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I0211 20:52:46.071608  123370 pv_controller.go:797] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I0211 20:52:46.071870  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (2.967924ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:46.072107  123370 pv_controller.go:807] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound failed: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0211 20:52:46.072128  123370 pv_controller.go:956] error binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound": failed saving the volume status: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0211 20:52:46.072152  123370 pv_controller_base.go:241] could not sync claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
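(Illustrative note: the stretch from 20:52:46.064 to 20:52:46.072 above is the PV controller's periodic resync picking the claim up again. pvc-i-prebound is pre-bound on the claim side, i.e. the claim names pv-i-pvc-prebound before the controller completes the bind, and the bind once more loses a write race when saving the volume status. A sketch of such a pre-bound claim built from the client-go API types follows; the namespace and volume name are taken from the log, while the storage size and access mode are assumptions, not the test's real fixture.)

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preBoundClaim sketches a PVC that is pre-bound by naming its volume in
// spec.volumeName, the situation the pvc-i-prebound log lines describe.
func preBoundClaim() *v1.PersistentVolumeClaim {
	return &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "pvc-i-prebound",
			Namespace: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002",
		},
		Spec: v1.PersistentVolumeClaimSpec{
			VolumeName:  "pv-i-pvc-prebound", // pre-binding: the claim names the volume up front
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			Resources: v1.ResourceRequirements{
				// 1Gi is an assumption; the real fixture's size is not in the log.
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
			},
		},
	}
}

func main() {
	fmt.Println(preBoundClaim().Spec.VolumeName)
}

Until the scheduler observes the bind as complete, a pod using such a claim is rejected with "pod has unbound immediate PersistentVolumeClaims", which is exactly the error repeated throughout this stretch of the log.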
I0211 20:52:46.073035  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound/status: (3.035465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42940]
I0211 20:52:46.073055  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.856347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42942]
E0211 20:52:46.073336  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:52:46.073441  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound
I0211 20:52:46.073462  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound
E0211 20:52:46.073600  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:52:46.073629  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I0211 20:52:46.074965  123370 wrap.go:47] PATCH /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events/pod-i-pvc-prebound.15826a883a0f415e: (4.184542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42944]
I0211 20:52:46.076014  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound/status: (2.075023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
E0211 20:52:46.076320  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:52:46.076390  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound
I0211 20:52:46.076428  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound
E0211 20:52:46.076553  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:52:46.076636  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I0211 20:52:46.078047  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (3.564049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:46.078854  123370 wrap.go:47] PATCH /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events/pod-i-pvc-prebound.15826a883a0f415e: (3.125725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42944]
I0211 20:52:46.079811  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.909023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:46.081842  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound/status: (4.367776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
E0211 20:52:46.082195  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:52:46.083002  123370 wrap.go:47] PATCH /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events/pod-i-pvc-prebound.15826a883a0f415e: (3.272931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42944]
I0211 20:52:46.092794  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.073541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:46.193802  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.944369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:46.293184  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.28783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:46.420887  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (30.035569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:46.484135  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:46.484140  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:46.484139  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:46.484308  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:46.484294  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:46.485194  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:46.500829  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (10.049068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:46.585267  123370 wrap.go:47] GET /api/v1/namespaces/default: (1.579664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:46.587292  123370 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.354078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:46.589093  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.23949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:46.592291  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.578136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:46.693567  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.275616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:46.792869  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.974356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:46.892900  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.997575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:46.992832  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.958511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:47.092848  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.983931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:47.192691  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.835272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:47.292688  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.858803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:47.393032  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.214049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:47.484369  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:47.484374  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:47.484454  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:47.484471  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:47.484545  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:47.485342  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:47.492493  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.715549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:47.592641  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.79605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:47.693120  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.043335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:47.793149  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.206823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:47.892651  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.832903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:47.992827  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.984486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:48.092605  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.789741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:48.192798  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.958196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:48.212467  123370 wrap.go:47] GET /api/v1/namespaces/default: (1.989229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:52:48.214659  123370 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.674114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:52:48.217870  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.747597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:52:48.292659  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.838403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:48.393069  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.136846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:48.484569  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:48.484588  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:48.484595  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:48.484612  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:48.484685  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:48.485485  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:48.492678  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.846773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:48.592833  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.013064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:48.693211  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.996329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:48.793307  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.418471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:48.892796  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.984624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:48.992976  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.040728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:49.093034  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.159166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:49.192752  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.870334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:49.292827  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.95702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:49.395780  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (3.618622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:49.484823  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:49.484902  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:49.484846  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:49.484924  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:49.484939  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:49.485676  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:49.492740  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.885268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:49.593284  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.360453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:49.692925  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.054484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:49.792858  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.994679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:49.893151  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.260057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:49.992868  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.978871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:50.093106  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.922732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:50.192978  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.106495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:50.293317  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.046026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:50.392656  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.801207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:50.485036  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:50.485094  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:50.485114  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:50.485130  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:50.485144  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:50.485962  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:50.492918  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.789199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:50.592849  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.022926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:50.692762  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.893663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:50.793392  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.558348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:50.893494  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.180183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:50.992832  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.001529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:51.092895  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.074721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:51.192890  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.963139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:51.293182  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.233986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:51.393003  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.141528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:51.485246  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:51.485266  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:51.485242  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:51.485296  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:51.485352  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:51.486135  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:51.492961  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.099466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:51.593394  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.257288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:51.693309  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.459586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:51.792766  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.913914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:51.893034  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.947988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:51.992611  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.773316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:52.092773  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.860386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:52.192847  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.954151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:52.292889  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.08968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:52.392819  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.84325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:52.485403  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:52.485451  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:52.485480  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:52.485494  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:52.485582  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:52.486343  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:52.492538  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.726619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:52.596764  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (5.84406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:52.692700  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.847294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:52.792760  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.885429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:52.892849  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.059241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:52.992678  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.917898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:53.092735  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.879878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:53.192962  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.075085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:53.292993  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.039546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:53.393307  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.450156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:53.485633  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:53.485641  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:53.485661  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:53.485660  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:53.485809  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:53.486562  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:53.493446  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.589431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:53.593158  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.27383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:53.692799  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.886735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:53.792934  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.988017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:53.893110  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.023717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:53.992830  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.893636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:54.093708  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.227975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:54.192854  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.98189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:54.292861  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.940609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:54.392828  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.928933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:54.485862  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:54.485880  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:54.485932  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:54.485962  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:54.486022  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:54.486795  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:54.492812  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.989413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:54.593010  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.150253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:54.692817  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.012278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:54.792581  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.735162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:54.892603  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.7696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:54.992936  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.166736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:55.093126  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.140374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:55.192746  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.887899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:55.293035  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.183302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:55.392907  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.019878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:55.486085  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:55.486099  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:55.486103  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:55.486118  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:55.486190  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:55.487057  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:55.493026  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.237909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:55.592859  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.033552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:55.692618  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.734759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:55.793156  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.245233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:55.893277  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.304175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:55.992670  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.779021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:56.092669  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.781773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:56.192930  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.091117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:56.293067  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.199011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:56.393179  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.275031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:56.486307  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:56.486329  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:56.486332  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:56.486363  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:56.486445  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:56.487175  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:56.493195  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.984239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:56.591470  123370 wrap.go:47] GET /api/v1/namespaces/default: (1.604338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:56.592353  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.625716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:52:56.593506  123370 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.468166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:56.595121  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.176182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:56.693356  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.101524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:56.792969  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.067268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:56.893041  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.163829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:56.992642  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.825493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:57.093142  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.283615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:57.193164  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.315233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:57.292738  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.910759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:57.392701  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.761862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:57.486620  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:57.486620  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:57.486635  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:57.486695  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:57.486802  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:57.487343  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:57.493079  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.01789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:57.592774  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.907619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:57.692662  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.805677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:57.792931  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.994037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:57.892934  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.960019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:57.993348  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.009882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:58.092746  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.911988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:58.193070  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.89054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:58.220282  123370 wrap.go:47] GET /api/v1/namespaces/default: (1.633683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:52:58.222031  123370 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.299428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:52:58.224404  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.828041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:52:58.292729  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.908509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:58.393375  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.441932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:58.486879  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:58.486879  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:58.487020  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:58.486903  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:58.486899  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:58.487509  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:58.492884  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.040546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:58.592451  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.599339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:58.692614  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.796715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:58.796841  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (5.897914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:58.893487  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.653119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:58.992753  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.896524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:59.093157  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.312117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:59.192863  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.970871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:59.292899  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.987657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:59.392906  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.051426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:59.487144  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:59.487172  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:59.487143  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:59.487167  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:59.487181  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:59.487702  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:52:59.492998  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.097446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:59.592887  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.03756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:59.695401  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.995513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:59.792925  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.04855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:59.893020  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.168212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:52:59.993031  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.165819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:53:00.092765  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.892148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:53:00.193091  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.910128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:53:00.292806  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.951094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:53:00.392840  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.937457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:53:00.487372  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:00.487367  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:00.487387  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:00.487391  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:00.487453  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:00.487885  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:00.492665  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.859069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:53:00.593368  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.442336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:53:00.693086  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (2.227341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:53:00.792764  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.823405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:53:00.892795  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.945569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:53:00.992822  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (1.981671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:53:01.064482  123370 pv_controller_base.go:408] resyncing PV controller
I0211 20:53:01.064586  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 32988
I0211 20:53:01.064648  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound (uid: f55cca89-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:01.064664  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound
I0211 20:53:01.064678  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound found: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I0211 20:53:01.064693  123370 pv_controller.go:636] synchronizing PersistentVolume[pv-i-pvc-prebound]: all is bound
I0211 20:53:01.064710  123370 pv_controller.go:794] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I0211 20:53:01.064820  123370 pv_controller.go:797] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I0211 20:53:01.064734  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound" with version 31460
I0211 20:53:01.064851  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I0211 20:53:01.064868  123370 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested
I0211 20:53:01.064887  123370 pv_controller.go:383] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested and found: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound (uid: f55cca89-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:01.064907  123370 pv_controller.go:407] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: volume already bound, finishing the binding
I0211 20:53:01.064916  123370 pv_controller.go:947] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound"
I0211 20:53:01.064934  123370 pv_controller.go:846] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound"
I0211 20:53:01.065001  123370 pv_controller.go:858] updating PersistentVolume[pv-i-pvc-prebound]: already bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound"
I0211 20:53:01.065028  123370 pv_controller.go:794] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I0211 20:53:01.065037  123370 pv_controller.go:797] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I0211 20:53:01.065044  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: binding to "pv-i-pvc-prebound"
I0211 20:53:01.065059  123370 pv_controller.go:917] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound"
I0211 20:53:01.067835  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound
I0211 20:53:01.067866  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound
I0211 20:53:01.068019  123370 scheduler_binder.go:665] All bound volumes for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound" match with Node "node-1"
I0211 20:53:01.068066  123370 scheduler_binder.go:659] PersistentVolume "pv-i-pvc-prebound", Node "node-2" mismatch for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound": No matching NodeSelectorTerms
I0211 20:53:01.068161  123370 scheduler_binder.go:269] AssumePodVolumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound", node "node-1"
I0211 20:53:01.068189  123370 scheduler_binder.go:279] AssumePodVolumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound", node "node-1": all PVCs bound and nothing to do
I0211 20:53:01.068258  123370 factory.go:733] Attempting to bind pod-i-pvc-prebound to node-1
I0211 20:53:01.133909  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-i-prebound: (68.389621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:53:01.133999  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound/binding: (65.345133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:53:01.134271  123370 scheduler.go:571] pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pvc-prebound is bound successfully on node node-1, 2 nodes evaluated, 1 nodes were found feasible
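For context on the POST to .../pods/pod-i-pvc-prebound/binding above (201): binding a pod is done by creating a Binding subresource whose Target names the chosen node. As a rough, hedged sketch only (the clientset wiring and the helper name are assumptions, not taken from this test), the equivalent call with the client-go vintage in use here looks like:

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // bindPod is a hypothetical helper: it POSTs a Binding whose Target names the
    // node, which is the same request the scheduler issued in the log line above.
    func bindPod(cs kubernetes.Interface, ns, podName, nodeName string) error {
        return cs.CoreV1().Pods(ns).Bind(&v1.Binding{
            ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: podName},
            Target:     v1.ObjectReference{Kind: "Node", Name: nodeName},
        })
    }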
I0211 20:53:01.134317  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound" with version 35391
I0211 20:53:01.134346  123370 pv_controller.go:928] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: bound to "pv-i-pvc-prebound"
I0211 20:53:01.134357  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound] status: set phase Bound
I0211 20:53:01.134471  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pvc-prebound: (43.462433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43802]
I0211 20:53:01.136309  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-i-prebound: (1.331239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43802]
I0211 20:53:01.136513  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.915988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:53:01.137156  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-i-prebound/status: (2.467325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:53:01.137436  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound" with version 35394
I0211 20:53:01.137458  123370 pv_controller.go:759] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound" entered phase "Bound"
I0211 20:53:01.137472  123370 pv_controller.go:973] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound"
I0211 20:53:01.137495  123370 pv_controller.go:974] volume "pv-i-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound (uid: f55cca89-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:01.137509  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound" status after binding: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I0211 20:53:01.137538  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound" with version 35394
I0211 20:53:01.137547  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I0211 20:53:01.137593  123370 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: volume "pv-i-pvc-prebound" found: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound (uid: f55cca89-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:01.137608  123370 pv_controller.go:483] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: claim is already correctly bound
I0211 20:53:01.137617  123370 pv_controller.go:947] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound"
I0211 20:53:01.137627  123370 pv_controller.go:846] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound"
I0211 20:53:01.137681  123370 pv_controller.go:858] updating PersistentVolume[pv-i-pvc-prebound]: already bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound"
I0211 20:53:01.137698  123370 pv_controller.go:794] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I0211 20:53:01.137708  123370 pv_controller.go:797] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I0211 20:53:01.137716  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: binding to "pv-i-pvc-prebound"
I0211 20:53:01.137730  123370 pv_controller.go:932] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound]: already bound to "pv-i-pvc-prebound"
I0211 20:53:01.137738  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound] status: set phase Bound
I0211 20:53:01.137766  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound] status: phase Bound already set
I0211 20:53:01.137777  123370 pv_controller.go:973] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound"
I0211 20:53:01.137796  123370 pv_controller.go:974] volume "pv-i-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound (uid: f55cca89-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:01.137807  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound" status after binding: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
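The pv_controller lines above trace a two-way bind completing: the volume already carries a ClaimRef pointing at pvc-i-prebound, so the controller only has to set the claim's VolumeName and confirm both phases as Bound. A rough sketch of what such a pre-bound PV/PVC pair looks like as API objects follows; the capacity, volume source, and helper name are illustrative assumptions, not copied from the test fixtures:

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // preboundPair returns a PV that is pre-bound to a claim (ClaimRef set by the
    // test author) and the claim as it looks after the controller finishes the
    // bind (VolumeName filled in), mirroring the pv-i-pvc-prebound case above.
    func preboundPair(ns string) (*v1.PersistentVolume, *v1.PersistentVolumeClaim) {
        pv := &v1.PersistentVolume{
            ObjectMeta: metav1.ObjectMeta{Name: "pv-i-pvc-prebound"},
            Spec: v1.PersistentVolumeSpec{
                Capacity:    v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
                AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
                PersistentVolumeSource: v1.PersistentVolumeSource{
                    HostPath: &v1.HostPathVolumeSource{Path: "/tmp/pv"},
                },
                // Pre-binding: the volume names the claim it is reserved for.
                ClaimRef: &v1.ObjectReference{Namespace: ns, Name: "pvc-i-prebound"},
            },
        }
        pvc := &v1.PersistentVolumeClaim{
            ObjectMeta: metav1.ObjectMeta{Name: "pvc-i-prebound", Namespace: ns},
            Spec: v1.PersistentVolumeClaimSpec{
                AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
                Resources: v1.ResourceRequirements{
                    Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
                },
                // Set by the controller when the bind completes, as logged above.
                VolumeName: "pv-i-pvc-prebound",
            },
        }
        return pv, pvc
    }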
I0211 20:53:01.138882  123370 wrap.go:47] GET /api/v1/persistentvolumes/pv-i-pvc-prebound: (1.789295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0211 20:53:01.144882  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (5.349364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:53:01.150005  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (4.74916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:53:01.150321  123370 pv_controller_base.go:251] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound" deleted
I0211 20:53:01.150403  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 32988
I0211 20:53:01.150474  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound (uid: f55cca89-2e3e-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:01.150493  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound
I0211 20:53:01.152296  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-i-prebound: (1.312542ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43802]
I0211 20:53:01.152570  123370 pv_controller.go:564] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound not found
I0211 20:53:01.152600  123370 pv_controller.go:592] volume "pv-i-pvc-prebound" is released and reclaim policy "Retain" will be executed
I0211 20:53:01.152613  123370 pv_controller.go:794] updating PersistentVolume[pv-i-pvc-prebound]: set phase Released
I0211 20:53:01.154515  123370 store.go:355] GuaranteedUpdate of /cefbd804-3d51-4834-a729-5f6c5123655d/persistentvolumes/pv-i-pvc-prebound failed because of a conflict, going to retry
I0211 20:53:01.154556  123370 wrap.go:47] DELETE /api/v1/persistentvolumes: (4.065907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:53:01.154715  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (1.793586ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43802]
I0211 20:53:01.154936  123370 pv_controller.go:807] updating PersistentVolume[pv-i-pvc-prebound]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": StorageError: invalid object, Code: 4, Key: /cefbd804-3d51-4834-a729-5f6c5123655d/persistentvolumes/pv-i-pvc-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f55c66fe-2e3e-11e9-8784-0242ac110002, UID in object meta: 
I0211 20:53:01.154982  123370 pv_controller_base.go:201] could not sync volume "pv-i-pvc-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": StorageError: invalid object, Code: 4, Key: /cefbd804-3d51-4834-a729-5f6c5123655d/persistentvolumes/pv-i-pvc-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f55c66fe-2e3e-11e9-8784-0242ac110002, UID in object meta: 
I0211 20:53:01.155021  123370 pv_controller_base.go:211] volume "pv-i-pvc-prebound" deleted
I0211 20:53:01.155052  123370 pv_controller_base.go:385] deletion of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound" was already processed
I0211 20:53:01.160873  123370 wrap.go:47] DELETE /apis/storage.k8s.io/v1/storageclasses: (5.769512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43802]
I0211 20:53:01.161108  123370 volume_binding_test.go:193] Running test immediate pv prebound
I0211 20:53:01.163198  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.855696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43802]
I0211 20:53:01.165072  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.430988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43802]
I0211 20:53:01.167116  123370 wrap.go:47] POST /api/v1/persistentvolumes: (1.60679ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43802]
I0211 20:53:01.168969  123370 pv_controller_base.go:491] storeObjectUpdate: adding volume "pv-i-prebound", version 35416
I0211 20:53:01.169014  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: phase: Pending, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound (uid: )", boundByController: false
I0211 20:53:01.169024  123370 pv_controller.go:523] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound
I0211 20:53:01.169030  123370 pv_controller.go:794] updating PersistentVolume[pv-i-prebound]: set phase Available
I0211 20:53:01.169049  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (1.449444ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43802]
I0211 20:53:01.169601  123370 pv_controller_base.go:491] storeObjectUpdate: adding claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound", version 35417
I0211 20:53:01.169659  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:53:01.169697  123370 pv_controller.go:352] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Pending, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound (uid: )", boundByController: false
I0211 20:53:01.169722  123370 pv_controller.go:947] binding volume "pv-i-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound"
I0211 20:53:01.169747  123370 pv_controller.go:846] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound"
I0211 20:53:01.169795  123370 pv_controller.go:865] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I0211 20:53:01.172049  123370 store.go:355] GuaranteedUpdate of /cefbd804-3d51-4834-a729-5f6c5123655d/persistentvolumes/pv-i-prebound failed because of a conflict, going to retry
I0211 20:53:01.172153  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (2.473312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0211 20:53:01.172374  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-prebound: (2.011753ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0211 20:53:01.172472  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-prebound" with version 35419
I0211 20:53:01.172509  123370 pv_controller.go:815] volume "pv-i-prebound" entered phase "Available"
I0211 20:53:01.172541  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-prebound" with version 35419
I0211 20:53:01.172574  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound (uid: )", boundByController: false
I0211 20:53:01.172586  123370 pv_controller.go:523] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound
I0211 20:53:01.172592  123370 pv_controller.go:794] updating PersistentVolume[pv-i-prebound]: set phase Available
I0211 20:53:01.172598  123370 pv_controller.go:797] updating PersistentVolume[pv-i-prebound]: phase Available already set
I0211 20:53:01.172682  123370 pv_controller.go:868] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I0211 20:53:01.172708  123370 pv_controller.go:950] error binding volume "pv-i-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I0211 20:53:01.172720  123370 pv_controller_base.go:241] could not sync claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I0211 20:53:01.172763  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (2.744642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43802]
I0211 20:53:01.173007  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound
I0211 20:53:01.173040  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound
I0211 20:53:01.173131  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:01.173242  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:01.173260  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:01.173284  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:01.173356  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:01.173391  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
E0211 20:53:01.173493  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:53:01.173548  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound to (PodScheduled==False, Reason=Unschedulable)
I0211 20:53:01.175341  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.464066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0211 20:53:01.176081  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.880301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:01.176230  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound/status: (2.376859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
E0211 20:53:01.176618  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:53:01.176705  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound
I0211 20:53:01.176733  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound
I0211 20:53:01.176813  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:01.176837  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:01.176908  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:01.176935  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:01.176962  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:01.176984  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
E0211 20:53:01.177069  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:53:01.177133  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound to (PodScheduled==False, Reason=Unschedulable)
I0211 20:53:01.178772  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.298292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0211 20:53:01.179164  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound/status: (1.711342ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
E0211 20:53:01.179341  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:53:01.179470  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound
I0211 20:53:01.179494  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound
I0211 20:53:01.179569  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:01.179681  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:01.179720  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:01.179782  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:01.179876  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:01.179921  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
E0211 20:53:01.179991  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:53:01.180039  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound to (PodScheduled==False, Reason=Unschedulable)
I0211 20:53:01.181170  123370 wrap.go:47] PATCH /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events/pod-i-pv-prebound.15826a8e4cb28184: (3.098523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43808]
I0211 20:53:01.181640  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.278744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0211 20:53:01.182500  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound/status: (2.121735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
E0211 20:53:01.182831  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:53:01.184357  123370 wrap.go:47] PATCH /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events/pod-i-pv-prebound.15826a8e4cb28184: (2.456147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43808]
I0211 20:53:01.275335  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.728123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:01.375614  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.084599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:01.475751  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.159902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:01.487618  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:01.487643  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:01.487618  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:01.487624  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:01.487638  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:01.488098  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:01.576523  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.868483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:01.675504  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.894579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:01.775452  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.905252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:01.875604  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.058164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:01.975134  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.619015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:02.075373  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.842447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:02.175516  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.851242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:02.275138  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.655185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:02.375380  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.854463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:02.476164  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.550135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:02.487853  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:02.487853  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:02.487858  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:02.487873  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:02.487873  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:02.488288  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:02.575357  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.81503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:02.675331  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.75973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:02.775315  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.851497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:02.875448  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.890935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:02.975272  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.780088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:03.075672  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.060556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:03.175581  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.042979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:03.275752  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.10524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:03.375703  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.029441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:03.475485  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.974799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:03.488053  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:03.488068  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:03.488132  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:03.488137  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:03.488157  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:03.488480  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:03.576046  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.421469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:03.675255  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.705726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:03.775572  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.046304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:03.875373  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.895886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:03.975609  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.085554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:04.075572  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.058025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:04.175537  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.962315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:04.275585  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.063998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:04.375512  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.932353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:04.475705  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.177045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:04.488269  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:04.488287  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:04.488269  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:04.488308  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:04.488291  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:04.488657  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:04.575587  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.101313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:04.675659  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.982539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:04.775186  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.623065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:04.875636  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.101478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:04.975370  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.80826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:05.075917  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.339591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:05.175623  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.99126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:05.275280  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.689901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:05.375576  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.060214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:05.475209  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.627771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:05.488485  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:05.488504  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:05.488526  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:05.488489  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:05.488489  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:05.488886  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:05.575557  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.977821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:05.675889  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.737435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:05.775312  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.8557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:05.875508  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.014045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:05.975487  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.812506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:06.075438  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.839732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:06.175484  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.937465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:06.275514  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.996776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:06.375451  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.929039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:06.475534  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.998001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:06.488699  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:06.488720  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:06.488699  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:06.488702  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:06.488770  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:06.489085  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:06.575455  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.940025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:06.597018  123370 wrap.go:47] GET /api/v1/namespaces/default: (1.288632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:06.598840  123370 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.320222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:06.600490  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.098933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:06.675354  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.858674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:06.775648  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.120655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:06.875359  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.831315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:06.975877  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.252947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:07.075813  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.282893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:07.175684  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.070248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:07.275658  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.085005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:07.375288  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.779773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:07.475399  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.847179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:07.488927  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:07.489035  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:07.489049  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:07.489050  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:07.489053  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:07.489342  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:07.575523  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.963408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:07.675479  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.967824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:07.775690  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.081955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:07.875713  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.073022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:07.975498  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.009866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:08.075842  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.131369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:08.176113  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.088227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:08.200616  123370 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.474759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:53:08.203826  123370 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.187185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:53:08.205376  123370 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (1.083213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:53:08.226843  123370 wrap.go:47] GET /api/v1/namespaces/default: (1.685213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:53:08.228691  123370 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.327574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:53:08.230369  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.224747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:53:08.275282  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.698255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:08.375443  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.863423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:08.475791  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.21146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:08.489148  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:08.489191  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:08.489225  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:08.489295  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:08.489399  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:08.489532  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:08.575796  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.241188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:08.676042  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.429825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:08.775906  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.965909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:08.875688  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.135455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:08.975833  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.283298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:09.078940  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.03222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:09.175566  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.985479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:09.275964  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.295069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:09.375690  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.969824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:09.476046  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.376665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:09.489328  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:09.489389  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:09.489395  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:09.489460  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:09.489623  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:09.489697  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:09.576077  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.427606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:09.675569  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.035801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:09.775488  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.067066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:09.875238  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.833559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:09.976141  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.54248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:10.075450  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.938054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:10.175438  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.855063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:10.275140  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.597181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:10.375388  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.865928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:10.475357  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.785791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:10.489523  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:10.489614  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:10.489617  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:10.489654  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:10.489790  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:10.489802  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:10.575840  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.238035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:10.675794  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.192631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:10.775194  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.695999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:10.875568  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.069495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:10.975375  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.889336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:11.075720  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.202919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:11.175520  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.00297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:11.275173  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.656843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:11.375885  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.244215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:11.475374  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.829894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:11.489801  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:11.489823  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:11.489806  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:11.489852  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:11.489993  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:11.490026  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:11.575714  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.781431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:11.675297  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.792682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:11.775898  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.384229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:11.875646  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.14164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:11.975379  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.833141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:12.075544  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.023955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:12.175707  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.119971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:12.275259  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.781372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:12.375165  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.670945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:12.475706  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.101584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:12.490011  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:12.490011  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:12.490016  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:12.490020  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:12.490146  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:12.490160  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:12.575392  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.831794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:12.675261  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.706933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:12.775265  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.770474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:12.875384  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.830026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:12.975483  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.787773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:13.075475  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.944727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:13.178641  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (4.832775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:13.276053  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.298049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:13.375744  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.127609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:13.475906  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.218812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:13.490371  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:13.490436  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:13.490479  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:13.490483  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:13.490369  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:13.490549  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:13.576075  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.369877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:13.675965  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.234127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:13.776357  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.728703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:13.876505  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.892496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:13.975656  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.011945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:14.076156  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.505015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:14.175314  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.741236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:14.275637  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.062037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:14.375693  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.098083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:14.475744  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.131611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:14.490723  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:14.490729  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:14.490780  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:14.490811  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:14.493822  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:14.493836  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:14.575496  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.967881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:14.675990  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.307333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:14.775940  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.192464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:14.875858  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.282204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:14.975710  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.95991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:15.075747  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.092292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:15.176026  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.323721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:15.275430  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.820473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:15.375720  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.171947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:15.475388  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.785304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:15.491007  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:15.491012  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:15.491026  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:15.491036  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:15.494134  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:15.494137  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:15.575502  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.872837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:15.675837  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.265472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:15.775506  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.98943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:15.876066  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.417975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:15.975460  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.744854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:16.064674  123370 pv_controller_base.go:408] resyncing PV controller
I0211 20:53:16.064790  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-prebound" with version 35419
I0211 20:53:16.064869  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound" with version 35417
I0211 20:53:16.064830  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound (uid: )", boundByController: false
I0211 20:53:16.065013  123370 pv_controller.go:523] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound
I0211 20:53:16.065025  123370 pv_controller.go:794] updating PersistentVolume[pv-i-prebound]: set phase Available
I0211 20:53:16.065046  123370 pv_controller.go:797] updating PersistentVolume[pv-i-prebound]: phase Available already set
I0211 20:53:16.064986  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:53:16.065111  123370 pv_controller.go:352] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound (uid: )", boundByController: false
I0211 20:53:16.065131  123370 pv_controller.go:947] binding volume "pv-i-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound"
I0211 20:53:16.065138  123370 pv_controller.go:846] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound"
I0211 20:53:16.065178  123370 pv_controller.go:865] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I0211 20:53:16.068027  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-prebound: (2.413436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:16.068311  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-prebound" with version 36016
I0211 20:53:16.068348  123370 pv_controller.go:878] updating PersistentVolume[pv-i-prebound]: bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound"
I0211 20:53:16.068357  123370 pv_controller.go:794] updating PersistentVolume[pv-i-prebound]: set phase Bound
I0211 20:53:16.068617  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-prebound" with version 36016
I0211 20:53:16.068665  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound (uid: 04e89300-2e3f-11e9-8784-0242ac110002)", boundByController: false
I0211 20:53:16.068675  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound
I0211 20:53:16.068737  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:53:16.068755  123370 pv_controller.go:623] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0211 20:53:16.068752  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound
I0211 20:53:16.068779  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound
I0211 20:53:16.068880  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.068912  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.069015  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.069037  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.069098  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.069126  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
E0211 20:53:16.069213  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:53:16.069279  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound to (PodScheduled==False, Reason=Unschedulable)
I0211 20:53:16.070909  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (2.225618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:16.071150  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-prebound" with version 36017
I0211 20:53:16.071179  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound (uid: 04e89300-2e3f-11e9-8784-0242ac110002)", boundByController: false
I0211 20:53:16.071186  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-prebound" with version 36017
I0211 20:53:16.071214  123370 pv_controller.go:815] volume "pv-i-prebound" entered phase "Bound"
I0211 20:53:16.071228  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I0211 20:53:16.071244  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.607738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0211 20:53:16.071291  123370 pv_controller.go:917] volume "pv-i-prebound" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound"
I0211 20:53:16.071195  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound
I0211 20:53:16.071442  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:53:16.071452  123370 pv_controller.go:623] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0211 20:53:16.071790  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound/status: (1.870828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44252]
E0211 20:53:16.072142  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:53:16.072231  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound
I0211 20:53:16.072240  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound
I0211 20:53:16.072333  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.072343  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.072464  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.072485  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.072499  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.072520  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound is not bound, assuming PVC matches predicate when counting limits
E0211 20:53:16.072568  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:53:16.072598  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound to (PodScheduled==False, Reason=Unschedulable)
I0211 20:53:16.073353  123370 wrap.go:47] PATCH /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events/pod-i-pv-prebound.15826a8e4cb28184: (3.082575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44254]
I0211 20:53:16.074009  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-i-pv-prebound: (2.342373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0211 20:53:16.074018  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.202071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44252]
I0211 20:53:16.074371  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound" with version 36020
I0211 20:53:16.074437  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound/status: (1.570155ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0211 20:53:16.074451  123370 pv_controller.go:928] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound]: bound to "pv-i-prebound"
I0211 20:53:16.074466  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound] status: set phase Bound
E0211 20:53:16.074717  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:53:16.074812  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound
I0211 20:53:16.074833  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound
I0211 20:53:16.074902  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (1.382238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44256]
I0211 20:53:16.075030  123370 scheduler_binder.go:659] PersistentVolume "pv-i-prebound", Node "node-2" mismatch for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound": No matching NodeSelectorTerms
I0211 20:53:16.075103  123370 scheduler_binder.go:665] All bound volumes for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound" match with Node "node-1"
I0211 20:53:16.075189  123370 scheduler_binder.go:269] AssumePodVolumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound", node "node-1"
I0211 20:53:16.075215  123370 scheduler_binder.go:279] AssumePodVolumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound", node "node-1": all PVCs bound and nothing to do
I0211 20:53:16.075277  123370 factory.go:733] Attempting to bind pod-i-pv-prebound to node-1
I0211 20:53:16.076489  123370 wrap.go:47] PATCH /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events/pod-i-pv-prebound.15826a8e4cb28184: (2.377969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44254]
I0211 20:53:16.077089  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-i-pv-prebound/status: (2.363234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44252]
I0211 20:53:16.077089  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound/binding: (1.464553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44256]
I0211 20:53:16.077440  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound" with version 36022
I0211 20:53:16.077478  123370 pv_controller.go:759] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound" entered phase "Bound"
I0211 20:53:16.077495  123370 pv_controller.go:973] volume "pv-i-prebound" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound"
I0211 20:53:16.077507  123370 pv_controller.go:974] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound (uid: 04e89300-2e3f-11e9-8784-0242ac110002)", boundByController: false
I0211 20:53:16.077583  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0211 20:53:16.077622  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound" with version 36022
I0211 20:53:16.077670  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound]: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0211 20:53:16.077697  123370 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound (uid: 04e89300-2e3f-11e9-8784-0242ac110002)", boundByController: false
I0211 20:53:16.077712  123370 pv_controller.go:483] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound]: claim is already correctly bound
I0211 20:53:16.077722  123370 pv_controller.go:947] binding volume "pv-i-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound"
I0211 20:53:16.077728  123370 scheduler.go:571] pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-pv-prebound is bound successfully on node node-1, 2 nodes evaluated, 1 nodes were found feasible
I0211 20:53:16.077734  123370 pv_controller.go:846] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound"
I0211 20:53:16.077778  123370 pv_controller.go:858] updating PersistentVolume[pv-i-prebound]: already bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound"
I0211 20:53:16.077796  123370 pv_controller.go:794] updating PersistentVolume[pv-i-prebound]: set phase Bound
I0211 20:53:16.077804  123370 pv_controller.go:797] updating PersistentVolume[pv-i-prebound]: phase Bound already set
I0211 20:53:16.077812  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I0211 20:53:16.077829  123370 pv_controller.go:932] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound]: already bound to "pv-i-prebound"
I0211 20:53:16.077836  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound] status: set phase Bound
I0211 20:53:16.077855  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound] status: phase Bound already set
I0211 20:53:16.077878  123370 pv_controller.go:973] volume "pv-i-prebound" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound"
I0211 20:53:16.077897  123370 pv_controller.go:974] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound (uid: 04e89300-2e3f-11e9-8784-0242ac110002)", boundByController: false
I0211 20:53:16.077912  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0211 20:53:16.077939  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound" with version 36022
I0211 20:53:16.077988  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound]: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0211 20:53:16.078006  123370 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound (uid: 04e89300-2e3f-11e9-8784-0242ac110002)", boundByController: false
I0211 20:53:16.078030  123370 pv_controller.go:483] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound]: claim is already correctly bound
I0211 20:53:16.078050  123370 pv_controller.go:947] binding volume "pv-i-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound"
I0211 20:53:16.078067  123370 pv_controller.go:846] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound"
I0211 20:53:16.078105  123370 pv_controller.go:858] updating PersistentVolume[pv-i-prebound]: already bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound"
I0211 20:53:16.078128  123370 pv_controller.go:794] updating PersistentVolume[pv-i-prebound]: set phase Bound
I0211 20:53:16.078141  123370 pv_controller.go:797] updating PersistentVolume[pv-i-prebound]: phase Bound already set
I0211 20:53:16.078145  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I0211 20:53:16.078154  123370 pv_controller.go:932] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound]: already bound to "pv-i-prebound"
I0211 20:53:16.078162  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound] status: set phase Bound
I0211 20:53:16.078201  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound] status: phase Bound already set
I0211 20:53:16.078211  123370 pv_controller.go:973] volume "pv-i-prebound" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound"
I0211 20:53:16.078228  123370 pv_controller.go:974] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound (uid: 04e89300-2e3f-11e9-8784-0242ac110002)", boundByController: false
I0211 20:53:16.078234  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0211 20:53:16.079388  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.386999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44254]
I0211 20:53:16.175709  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-pv-prebound: (2.008914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44254]
I0211 20:53:16.178099  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-i-pv-prebound: (1.575284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44254]
I0211 20:53:16.180478  123370 wrap.go:47] GET /api/v1/persistentvolumes/pv-i-prebound: (1.773829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44254]
I0211 20:53:16.187862  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (6.823236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44254]
I0211 20:53:16.192594  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (4.118951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44254]
I0211 20:53:16.193107  123370 pv_controller_base.go:251] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound" deleted
I0211 20:53:16.193167  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-prebound" with version 36017
I0211 20:53:16.193194  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound (uid: 04e89300-2e3f-11e9-8784-0242ac110002)", boundByController: false
I0211 20:53:16.193213  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound
I0211 20:53:16.193225  123370 pv_controller.go:564] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound not found
I0211 20:53:16.193241  123370 pv_controller.go:592] volume "pv-i-prebound" is released and reclaim policy "Retain" will be executed
I0211 20:53:16.193248  123370 pv_controller.go:794] updating PersistentVolume[pv-i-prebound]: set phase Released
I0211 20:53:16.195496  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (1.930188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0211 20:53:16.195758  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-prebound" with version 36028
I0211 20:53:16.195782  123370 pv_controller.go:815] volume "pv-i-prebound" entered phase "Released"
I0211 20:53:16.195790  123370 pv_controller.go:1027] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
I0211 20:53:16.195811  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-prebound" with version 36028
I0211 20:53:16.195833  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: phase: Released, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound (uid: 04e89300-2e3f-11e9-8784-0242ac110002)", boundByController: false
I0211 20:53:16.195847  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound
I0211 20:53:16.195857  123370 pv_controller.go:564] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound not found
I0211 20:53:16.195869  123370 pv_controller.go:1027] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
I0211 20:53:16.197070  123370 wrap.go:47] DELETE /api/v1/persistentvolumes: (4.095876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44254]
I0211 20:53:16.197762  123370 pv_controller_base.go:211] volume "pv-i-prebound" deleted
I0211 20:53:16.197809  123370 pv_controller_base.go:385] deletion of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-pv-prebound" was already processed
I0211 20:53:16.204031  123370 wrap.go:47] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.558281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44254]
I0211 20:53:16.204176  123370 volume_binding_test.go:193] Running test wait cannot bind
I0211 20:53:16.205882  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.50405ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44254]
I0211 20:53:16.207631  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.325157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44254]
I0211 20:53:16.210138  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (1.888412ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44254]
I0211 20:53:16.210392  123370 pv_controller_base.go:491] storeObjectUpdate: adding claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind", version 36034
I0211 20:53:16.210443  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:53:16.210460  123370 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind]: no volume found
I0211 20:53:16.210491  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind] status: set phase Pending
I0211 20:53:16.210515  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind] status: phase Pending already set
I0211 20:53:16.210599  123370 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002", Name:"pvc-w-cannotbind", UID:"0ddfa414-2e3f-11e9-8784-0242ac110002", APIVersion:"v1", ResourceVersion:"36034", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0211 20:53:16.212738  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.918669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44254]
I0211 20:53:16.212752  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (1.804514ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0211 20:53:16.212908  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind
I0211 20:53:16.212925  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind
I0211 20:53:16.213047  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.213173  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.213203  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.213049  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.213241  123370 scheduler_binder.go:697] No matching volumes for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind", PVC "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind" on node "node-2"
I0211 20:53:16.213261  123370 scheduler_binder.go:736] storage class "wait-qzpl" of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind" does not support dynamic provisioning
I0211 20:53:16.213381  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.213404  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.213459  123370 scheduler_binder.go:697] No matching volumes for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind", PVC "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind" on node "node-1"
I0211 20:53:16.213477  123370 scheduler_binder.go:736] storage class "wait-qzpl" of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind" does not support dynamic provisioning
I0211 20:53:16.213517  123370 factory.go:647] Unable to schedule volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I0211 20:53:16.213554  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I0211 20:53:16.215278  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-cannotbind: (1.395924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0211 20:53:16.215578  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.427251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44258]
I0211 20:53:16.216138  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-cannotbind/status: (2.338007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44254]
I0211 20:53:16.218148  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-cannotbind: (1.246027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44258]
I0211 20:53:16.218530  123370 generic_scheduler.go:306] Preemption will not help schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind on any node.
I0211 20:53:16.218715  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind
I0211 20:53:16.218743  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind
I0211 20:53:16.218819  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.218873  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.218912  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.218939  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.218974  123370 scheduler_binder.go:697] No matching volumes for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind", PVC "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind" on node "node-1"
I0211 20:53:16.219059  123370 scheduler_binder.go:736] storage class "wait-qzpl" of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind" does not support dynamic provisioning
I0211 20:53:16.219073  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.219090  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:16.219116  123370 scheduler_binder.go:697] No matching volumes for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind", PVC "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind" on node "node-2"
I0211 20:53:16.219180  123370 scheduler_binder.go:736] storage class "wait-qzpl" of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind" does not support dynamic provisioning
I0211 20:53:16.219256  123370 factory.go:647] Unable to schedule volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I0211 20:53:16.219288  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I0211 20:53:16.221369  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-cannotbind: (1.741852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44258]
I0211 20:53:16.222103  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-cannotbind/status: (2.477627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0211 20:53:16.223198  123370 wrap.go:47] PATCH /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events/pod-w-cannotbind.15826a91cd26d064: (2.62615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44260]
I0211 20:53:16.224037  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-cannotbind: (1.33288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0211 20:53:16.224504  123370 generic_scheduler.go:306] Preemption will not help schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind on any node.
I0211 20:53:16.315545  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-cannotbind: (2.005774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44260]
I0211 20:53:16.317718  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-cannotbind: (1.460416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44260]
I0211 20:53:16.322548  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind
I0211 20:53:16.322604  123370 scheduler.go:449] Skip schedule deleting pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-cannotbind
I0211 20:53:16.323871  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (5.672521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44260]
I0211 20:53:16.324786  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.832104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44258]
I0211 20:53:16.328485  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (4.123234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44260]
I0211 20:53:16.328589  123370 pv_controller_base.go:251] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-cannotbind" deleted
I0211 20:53:16.330162  123370 wrap.go:47] DELETE /api/v1/persistentvolumes: (1.133719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44260]
I0211 20:53:16.337926  123370 wrap.go:47] DELETE /apis/storage.k8s.io/v1/storageclasses: (7.232868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44260]
I0211 20:53:16.338128  123370 volume_binding_test.go:193] Running test wait pvc prebound
I0211 20:53:16.339693  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.237903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44260]
I0211 20:53:16.341561  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.380494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44260]
I0211 20:53:16.343919  123370 wrap.go:47] POST /api/v1/persistentvolumes: (1.812971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44260]
I0211 20:53:16.344040  123370 pv_controller_base.go:491] storeObjectUpdate: adding volume "pv-w-pvc-prebound", version 36048
I0211 20:53:16.344089  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Pending, bound to: "", boundByController: false
I0211 20:53:16.344101  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I0211 20:53:16.344241  123370 pv_controller.go:794] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I0211 20:53:16.346565  123370 pv_controller_base.go:491] storeObjectUpdate: adding claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound", version 36049
I0211 20:53:16.346600  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I0211 20:53:16.346612  123370 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested
I0211 20:53:16.346625  123370 pv_controller.go:383] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested and found: phase: Pending, bound to: "", boundByController: false
I0211 20:53:16.346638  123370 pv_controller.go:387] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: volume is unbound, binding
I0211 20:53:16.346652  123370 pv_controller.go:947] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound"
I0211 20:53:16.346688  123370 pv_controller.go:846] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound"
I0211 20:53:16.346734  123370 pv_controller.go:865] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound" bound to volume "pv-w-pvc-prebound"
I0211 20:53:16.346740  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (2.172375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44260]
I0211 20:53:16.346906  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (2.377914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44258]
I0211 20:53:16.347148  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 36050
I0211 20:53:16.347183  123370 pv_controller.go:815] volume "pv-w-pvc-prebound" entered phase "Available"
I0211 20:53:16.347201  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 36050
I0211 20:53:16.347209  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I0211 20:53:16.347232  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I0211 20:53:16.347237  123370 pv_controller.go:794] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I0211 20:53:16.347242  123370 pv_controller.go:797] updating PersistentVolume[pv-w-pvc-prebound]: phase Available already set
I0211 20:53:16.348602  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound: (1.390966ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44258]
I0211 20:53:16.348864  123370 pv_controller.go:868] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0211 20:53:16.348901  123370 pv_controller.go:950] error binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0211 20:53:16.348923  123370 pv_controller_base.go:241] could not sync claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0211 20:53:16.349282  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (1.811616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44262]
I0211 20:53:16.349587  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound
I0211 20:53:16.349604  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound
E0211 20:53:16.349843  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:53:16.349895  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I0211 20:53:16.351463  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.277288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44260]
I0211 20:53:16.352273  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.633519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:16.352519  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound/status: (2.169593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44258]
E0211 20:53:16.352772  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:53:16.352868  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound
I0211 20:53:16.352933  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound
E0211 20:53:16.353088  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:53:16.353145  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I0211 20:53:16.354763  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.239031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44260]
I0211 20:53:16.355398  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound/status: (2.026812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
E0211 20:53:16.355703  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:53:16.356401  123370 wrap.go:47] PATCH /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events/pod-w-pvc-prebound.15826a91d5472955: (2.47943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44266]
I0211 20:53:16.452056  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.835206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:16.491335  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:16.491335  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:16.491366  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:16.491361  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:16.494337  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:16.494376  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:16.552881  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.404008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:16.603034  123370 wrap.go:47] GET /api/v1/namespaces/default: (1.76002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:16.604877  123370 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.363909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:16.606656  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.390294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:16.652518  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.979349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:16.752242  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.954506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:16.852301  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.056306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:16.953322  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.820457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:17.052505  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.222119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:17.151997  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.646569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:17.252296  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.006239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:17.352739  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.19163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:17.452369  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.025163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:17.491594  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:17.491631  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:17.491615  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:17.491664  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:17.494555  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:17.494567  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:17.553204  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.711883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:17.652094  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.740708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:17.752691  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.329971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:17.852759  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.252049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:17.952855  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.520088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:18.053173  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.596944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:18.152755  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.267886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:18.233006  123370 wrap.go:47] GET /api/v1/namespaces/default: (1.711484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:53:18.235269  123370 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.695123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:53:18.236977  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.164689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:53:18.252769  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.072183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:18.352043  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.740038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:18.453664  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.970764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:18.491860  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:18.491872  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:18.491890  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:18.491911  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:18.494787  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:18.494795  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:18.553837  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.459719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:18.652493  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.083775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:18.756243  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (5.947412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:18.852697  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.21507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:18.952257  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.941105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:19.052863  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.402035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:19.152103  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.757926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:19.252675  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.246226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:19.352743  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.307104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:19.452477  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.111764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:19.492285  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:19.492288  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:19.492347  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:19.492353  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:19.494999  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:19.495103  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:19.552509  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.166152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:19.652482  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.051457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:19.752095  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.773955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:19.852265  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.910535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:19.952244  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.94045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:20.052228  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.866716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:20.152830  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.438188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:20.252563  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.152153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:20.352290  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.948665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:20.452455  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.156026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:20.492536  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:20.492552  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:20.492579  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:20.492683  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:20.495172  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:20.495302  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:20.552817  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.424183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:20.652756  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.376787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:20.753046  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.671125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:20.852505  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.241299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:20.952587  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.303158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:21.051983  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.698724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:21.152270  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.916923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:21.252796  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.231224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:21.352259  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.86503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:21.452317  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.967821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:21.492745  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:21.492752  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:21.492754  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:21.492877  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:21.495379  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:21.495545  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:21.552392  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.924384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:21.652432  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.880939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:21.752087  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.771035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:21.852400  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.950639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:21.952149  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.785468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:22.052489  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.108667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:22.152586  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.230525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:22.252883  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.428733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:22.352507  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.017768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:22.452529  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.288481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:22.492997  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:22.493026  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:22.493018  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:22.493157  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:22.495565  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:22.495988  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:22.552531  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.181539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:22.652474  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.033179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:22.752564  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.198398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:22.852694  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.34997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:22.952487  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.090893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:23.052277  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.984108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:23.152469  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.985528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:23.252585  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.114635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:23.352340  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.004908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:23.452527  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.191782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:23.493275  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:23.493345  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:23.493277  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:23.493289  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:23.495666  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:23.496200  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:23.552551  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.127356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:23.652859  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.409872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:23.752174  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.899103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:23.852398  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.980114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:23.952635  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.223379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:24.053160  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.760115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:24.152588  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.194904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:24.252847  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.251857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:24.353136  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.58626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:24.452575  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.217084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:24.493561  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:24.493608  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:24.493624  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:24.493588  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:24.495885  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:24.496428  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:24.552580  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.058624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:24.652356  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.965877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:24.752312  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.980571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:24.852908  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.389771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:24.953609  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (3.141393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:25.052381  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.044706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:25.152578  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.111397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:25.252545  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.015268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:25.353799  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.984698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:25.452781  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.394061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:25.493846  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:25.493857  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:25.493857  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:25.493863  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:25.496093  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:25.496666  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:25.553455  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.925277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:25.652982  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.485044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:25.752213  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.683995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:25.852303  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.966278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:25.952602  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.23801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:26.052507  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.188846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:26.152486  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.158734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:26.252381  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.00819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:26.353869  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (3.558822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:26.452622  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.308829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:26.494188  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:26.494296  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:26.494314  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:26.494329  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:26.496272  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:26.496909  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:26.553051  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.743236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:26.586309  123370 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.553442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:26.588130  123370 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.297174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:26.590330  123370 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (1.589065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:26.609138  123370 wrap.go:47] GET /api/v1/namespaces/default: (1.544493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:26.611486  123370 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.659069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:26.613335  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.263282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:26.652586  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.178418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:26.752632  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.209594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:26.852559  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.153583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:26.953262  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.155247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:27.052364  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.136619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:27.164691  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (14.234217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:27.253601  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (3.076885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:27.353121  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.71243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:27.452522  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.166428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:27.494533  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:27.494577  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:27.494533  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:27.494556  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:27.496515  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:27.497149  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:27.552582  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.095993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:27.657201  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (6.319275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:27.752773  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.423535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:27.852755  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.296948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:27.952912  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.429207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:28.053727  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.355392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:28.164213  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (13.856925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:28.240183  123370 wrap.go:47] GET /api/v1/namespaces/default: (2.279328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:53:28.244674  123370 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (3.9911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:53:28.246796  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.556176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:53:28.252111  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.870639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:28.361506  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (11.176698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:28.452534  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.27123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:28.494711  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:28.494709  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:28.494712  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:28.494730  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:28.496704  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:28.497396  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:28.552358  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.092912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:28.673051  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (22.504171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:28.753235  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.707499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:28.852896  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.507285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:28.952330  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.928632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:29.052831  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.516236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:29.168360  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (18.017972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:29.259171  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (8.757911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:29.356877  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (6.463375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:29.452741  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.460331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:29.494934  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:29.495091  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:29.495110  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:29.495121  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:29.496941  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:29.497523  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:29.553048  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.418261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:29.652393  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.113976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:29.752437  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.850207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:29.852467  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.131266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:29.952655  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.280828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:30.052270  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.808196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:30.152204  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.961901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:30.252510  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.146002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:30.352109  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.889429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:30.452494  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.272478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:30.495250  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:30.495283  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:30.495250  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:30.495268  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:30.497204  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:30.497758  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:30.551953  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.714824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:30.652102  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.865189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:30.752474  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.119372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:30.852211  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.914929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:30.952326  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.062504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:31.052283  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.977491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:31.064937  123370 pv_controller_base.go:408] resyncing PV controller
I0211 20:53:31.065049  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 36050
I0211 20:53:31.065088  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I0211 20:53:31.065101  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I0211 20:53:31.065113  123370 pv_controller.go:794] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I0211 20:53:31.065125  123370 pv_controller.go:797] updating PersistentVolume[pv-w-pvc-prebound]: phase Available already set
I0211 20:53:31.065141  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound" with version 36049
I0211 20:53:31.065162  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I0211 20:53:31.065174  123370 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested
I0211 20:53:31.065186  123370 pv_controller.go:383] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested and found: phase: Available, bound to: "", boundByController: false
I0211 20:53:31.065198  123370 pv_controller.go:387] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: volume is unbound, binding
I0211 20:53:31.065263  123370 pv_controller.go:947] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound"
I0211 20:53:31.065288  123370 pv_controller.go:846] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound"
I0211 20:53:31.065324  123370 pv_controller.go:865] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound" bound to volume "pv-w-pvc-prebound"
I0211 20:53:31.067565  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound: (1.895113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:31.067809  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 36279
I0211 20:53:31.067849  123370 pv_controller.go:878] updating PersistentVolume[pv-w-pvc-prebound]: bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound"
I0211 20:53:31.067861  123370 pv_controller.go:794] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I0211 20:53:31.067999  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 36279
I0211 20:53:31.068033  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound
I0211 20:53:31.068043  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound (uid: 0df466bf-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.068049  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound
I0211 20:53:31.068053  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound
I0211 20:53:31.068067  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound found: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I0211 20:53:31.068087  123370 pv_controller.go:636] synchronizing PersistentVolume[pv-w-pvc-prebound]: all is bound
I0211 20:53:31.068095  123370 pv_controller.go:794] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
E0211 20:53:31.068292  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:53:31.068330  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I0211 20:53:31.069886  123370 store.go:355] GuaranteedUpdate of /cefbd804-3d51-4834-a729-5f6c5123655d/persistentvolumes/pv-w-pvc-prebound failed because of a conflict, going to retry
I0211 20:53:31.070131  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (1.666481ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44260]
I0211 20:53:31.070161  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (2.049377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:31.070348  123370 pv_controller.go:807] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound failed: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0211 20:53:31.070371  123370 pv_controller_base.go:201] could not sync volume "pv-w-pvc-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
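[editor's note] The 409 on the PUT at 20:53:31.070131 and the "Operation cannot be fulfilled ... the object has been modified" lines above are the PV controller losing an optimistic-concurrency race on pv-w-pvc-prebound; it recovers by re-reading the object at the newer resource version on its next sync (see the lines that follow). Below is a minimal sketch of that retry-on-conflict pattern using client-go's retry helper with a current client-go signature; it is not the controller's actual code path, and the clientset variable and the specific status mutation are assumptions for illustration.

```go
// Sketch: retrying a status update after a 409 Conflict, analogous to how the
// PV controller re-reads and reapplies its change on the next sync.
// Assumes a configured *kubernetes.Clientset and a recent client-go.
package example

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// setPVPhaseBound re-fetches the PersistentVolume and reapplies the phase
// change whenever the apiserver answers "the object has been modified".
func setPVPhaseBound(ctx context.Context, cs *kubernetes.Clientset, pvName string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pv, err := cs.CoreV1().PersistentVolumes().Get(ctx, pvName, metav1.GetOptions{})
		if err != nil {
			return err
		}
		pv.Status.Phase = v1.VolumeBound
		_, err = cs.CoreV1().PersistentVolumes().UpdateStatus(ctx, pv, metav1.UpdateOptions{})
		return err // a Conflict error here triggers another attempt with fresh state
	})
}
```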
I0211 20:53:31.070398  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 36280
I0211 20:53:31.070468  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound (uid: 0df466bf-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.070495  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound
I0211 20:53:31.070511  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound found: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I0211 20:53:31.070524  123370 pv_controller.go:636] synchronizing PersistentVolume[pv-w-pvc-prebound]: all is bound
I0211 20:53:31.070533  123370 pv_controller.go:794] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I0211 20:53:31.070540  123370 pv_controller.go:797] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I0211 20:53:31.070367  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 36280
I0211 20:53:31.070564  123370 pv_controller.go:815] volume "pv-w-pvc-prebound" entered phase "Bound"
I0211 20:53:31.070579  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: binding to "pv-w-pvc-prebound"
I0211 20:53:31.070597  123370 pv_controller.go:917] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound"
I0211 20:53:31.071071  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.162064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44590]
I0211 20:53:31.071083  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound/status: (2.195004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44588]
E0211 20:53:31.071480  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:53:31.071563  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound
I0211 20:53:31.071587  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound
E0211 20:53:31.071790  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:53:31.071855  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I0211 20:53:31.072389  123370 wrap.go:47] PATCH /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events/pod-w-pvc-prebound.15826a91d5472955: (3.145701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44594]
I0211 20:53:31.072709  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-prebound: (1.702039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:31.072919  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound" with version 36283
I0211 20:53:31.072970  123370 pv_controller.go:928] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: bound to "pv-w-pvc-prebound"
I0211 20:53:31.072979  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound] status: set phase Bound
I0211 20:53:31.073272  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (1.175433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44590]
I0211 20:53:31.073774  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound/status: (1.668955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44260]
E0211 20:53:31.074043  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:53:31.074123  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound
I0211 20:53:31.074147  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound
I0211 20:53:31.074297  123370 scheduler_binder.go:665] All bound volumes for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound" match with Node "node-1"
I0211 20:53:31.074461  123370 scheduler_binder.go:659] PersistentVolume "pv-w-pvc-prebound", Node "node-2" mismatch for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound": No matching NodeSelectorTerms
I0211 20:53:31.074540  123370 scheduler_binder.go:269] AssumePodVolumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound", node "node-1"
I0211 20:53:31.074568  123370 scheduler_binder.go:279] AssumePodVolumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound", node "node-1": all PVCs bound and nothing to do
I0211 20:53:31.074642  123370 factory.go:733] Attempting to bind pod-w-pvc-prebound to node-1
I0211 20:53:31.075492  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-prebound/status: (2.266966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44594]
I0211 20:53:31.075677  123370 wrap.go:47] PATCH /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events/pod-w-pvc-prebound.15826a91d5472955: (2.487882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44264]
I0211 20:53:31.075710  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound" with version 36284
I0211 20:53:31.075734  123370 pv_controller.go:759] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound" entered phase "Bound"
I0211 20:53:31.075749  123370 pv_controller.go:973] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound"
I0211 20:53:31.075771  123370 pv_controller.go:974] volume "pv-w-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound (uid: 0df466bf-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.075787  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound" status after binding: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I0211 20:53:31.075829  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound" with version 36284
I0211 20:53:31.075862  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I0211 20:53:31.075882  123370 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: volume "pv-w-pvc-prebound" found: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound (uid: 0df466bf-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.075904  123370 pv_controller.go:483] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: claim is already correctly bound
I0211 20:53:31.075914  123370 pv_controller.go:947] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound"
I0211 20:53:31.075939  123370 pv_controller.go:846] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound"
I0211 20:53:31.075974  123370 pv_controller.go:858] updating PersistentVolume[pv-w-pvc-prebound]: already bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound"
I0211 20:53:31.075983  123370 pv_controller.go:794] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I0211 20:53:31.075990  123370 pv_controller.go:797] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I0211 20:53:31.075997  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: binding to "pv-w-pvc-prebound"
I0211 20:53:31.076014  123370 pv_controller.go:932] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: already bound to "pv-w-pvc-prebound"
I0211 20:53:31.076033  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound] status: set phase Bound
I0211 20:53:31.076050  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound] status: phase Bound already set
I0211 20:53:31.076067  123370 pv_controller.go:973] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound"
I0211 20:53:31.076113  123370 pv_controller.go:974] volume "pv-w-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound (uid: 0df466bf-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.076136  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound" status after binding: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I0211 20:53:31.076164  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound" with version 36284
I0211 20:53:31.076182  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I0211 20:53:31.076206  123370 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: volume "pv-w-pvc-prebound" found: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound (uid: 0df466bf-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.076222  123370 pv_controller.go:483] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: claim is already correctly bound
I0211 20:53:31.076232  123370 pv_controller.go:947] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound"
I0211 20:53:31.076247  123370 pv_controller.go:846] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound"
I0211 20:53:31.076282  123370 pv_controller.go:858] updating PersistentVolume[pv-w-pvc-prebound]: already bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound"
I0211 20:53:31.076302  123370 pv_controller.go:794] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I0211 20:53:31.076309  123370 pv_controller.go:797] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I0211 20:53:31.076315  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: binding to "pv-w-pvc-prebound"
I0211 20:53:31.076327  123370 pv_controller.go:932] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound]: already bound to "pv-w-pvc-prebound"
I0211 20:53:31.076335  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound] status: set phase Bound
I0211 20:53:31.076366  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound] status: phase Bound already set
I0211 20:53:31.076375  123370 pv_controller.go:973] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound"
I0211 20:53:31.076399  123370 pv_controller.go:974] volume "pv-w-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound (uid: 0df466bf-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.076448  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound" status after binding: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I0211 20:53:31.076639  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound/binding: (1.451638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44260]
I0211 20:53:31.076932  123370 scheduler.go:571] pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-w-pvc-prebound is bound successfully on node node-1, 2 nodes evaluated, 1 nodes were found feasible
I0211 20:53:31.078595  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.390009ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44594]
I0211 20:53:31.152192  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-w-pvc-prebound: (2.011503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44594]
I0211 20:53:31.154029  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-prebound: (1.209406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44594]
I0211 20:53:31.155758  123370 wrap.go:47] GET /api/v1/persistentvolumes/pv-w-pvc-prebound: (1.269677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44594]
I0211 20:53:31.161651  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (5.321173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44594]
I0211 20:53:31.187752  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (25.691186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44594]
I0211 20:53:31.188114  123370 pv_controller_base.go:251] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound" deleted
I0211 20:53:31.188154  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 36280
I0211 20:53:31.188188  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound (uid: 0df466bf-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.188197  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound
I0211 20:53:31.189561  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-prebound: (1.100129ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44590]
I0211 20:53:31.189836  123370 pv_controller.go:564] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound not found
I0211 20:53:31.189866  123370 pv_controller.go:592] volume "pv-w-pvc-prebound" is released and reclaim policy "Retain" will be executed
I0211 20:53:31.189879  123370 pv_controller.go:794] updating PersistentVolume[pv-w-pvc-prebound]: set phase Released
I0211 20:53:31.192349  123370 store.go:239] deletion of /cefbd804-3d51-4834-a729-5f6c5123655d/persistentvolumes/pv-w-pvc-prebound failed because of a conflict, going to retry
I0211 20:53:31.192477  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (2.300477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44590]
I0211 20:53:31.192704  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 36291
I0211 20:53:31.192747  123370 pv_controller.go:815] volume "pv-w-pvc-prebound" entered phase "Released"
I0211 20:53:31.192783  123370 pv_controller.go:1027] reclaimVolume[pv-w-pvc-prebound]: policy is Retain, nothing to do
I0211 20:53:31.192808  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 36291
I0211 20:53:31.192839  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Released, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound (uid: 0df466bf-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.192849  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound
I0211 20:53:31.192865  123370 pv_controller.go:564] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound not found
I0211 20:53:31.192873  123370 pv_controller.go:1027] reclaimVolume[pv-w-pvc-prebound]: policy is Retain, nothing to do
I0211 20:53:31.193552  123370 wrap.go:47] DELETE /api/v1/persistentvolumes: (5.298858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44594]
I0211 20:53:31.193803  123370 pv_controller_base.go:211] volume "pv-w-pvc-prebound" deleted
I0211 20:53:31.193843  123370 pv_controller_base.go:385] deletion of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-prebound" was already processed
I0211 20:53:31.200242  123370 wrap.go:47] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.316732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44594]
I0211 20:53:31.200456  123370 volume_binding_test.go:193] Running test mix immediate and wait
I0211 20:53:31.202082  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.417428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44594]
I0211 20:53:31.203708  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.24338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44594]
I0211 20:53:31.205783  123370 wrap.go:47] POST /api/v1/persistentvolumes: (1.669261ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44594]
I0211 20:53:31.206044  123370 pv_controller_base.go:491] storeObjectUpdate: adding volume "pv-w-canbind-4", version 36297
I0211 20:53:31.206080  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Pending, bound to: "", boundByController: false
I0211 20:53:31.206093  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-w-canbind-4]: volume is unused
I0211 20:53:31.206100  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind-4]: set phase Available
I0211 20:53:31.207722  123370 wrap.go:47] POST /api/v1/persistentvolumes: (1.533478ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44594]
I0211 20:53:31.207791  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (1.462758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44590]
I0211 20:53:31.208056  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36299
I0211 20:53:31.208087  123370 pv_controller.go:815] volume "pv-w-canbind-4" entered phase "Available"
I0211 20:53:31.208181  123370 pv_controller_base.go:491] storeObjectUpdate: adding volume "pv-i-canbind-2", version 36298
I0211 20:53:31.208216  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Pending, bound to: "", boundByController: false
I0211 20:53:31.208229  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-i-canbind-2]: volume is unused
I0211 20:53:31.208236  123370 pv_controller.go:794] updating PersistentVolume[pv-i-canbind-2]: set phase Available
I0211 20:53:31.209776  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (1.527131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44590]
I0211 20:53:31.210167  123370 pv_controller_base.go:491] storeObjectUpdate: adding claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4", version 36300
I0211 20:53:31.210203  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:53:31.210231  123370 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4]: no volume found
I0211 20:53:31.210272  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4] status: set phase Pending
I0211 20:53:31.210282  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (1.691668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44594]
I0211 20:53:31.210293  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4] status: phase Pending already set
I0211 20:53:31.210316  123370 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002", Name:"pvc-w-canbind-4", UID:"16d06ac0-2e3f-11e9-8784-0242ac110002", APIVersion:"v1", ResourceVersion:"36300", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0211 20:53:31.210554  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-canbind-2" with version 36301
I0211 20:53:31.210593  123370 pv_controller.go:815] volume "pv-i-canbind-2" entered phase "Available"
I0211 20:53:31.210620  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36299
I0211 20:53:31.210650  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Available, bound to: "", boundByController: false
I0211 20:53:31.210661  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-w-canbind-4]: volume is unused
I0211 20:53:31.210668  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind-4]: set phase Available
I0211 20:53:31.210687  123370 pv_controller.go:797] updating PersistentVolume[pv-w-canbind-4]: phase Available already set
I0211 20:53:31.210704  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-canbind-2" with version 36301
I0211 20:53:31.210725  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Available, bound to: "", boundByController: false
I0211 20:53:31.210735  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-i-canbind-2]: volume is unused
I0211 20:53:31.210741  123370 pv_controller.go:794] updating PersistentVolume[pv-i-canbind-2]: set phase Available
I0211 20:53:31.210750  123370 pv_controller.go:797] updating PersistentVolume[pv-i-canbind-2]: phase Available already set
I0211 20:53:31.212017  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (1.826604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44590]
I0211 20:53:31.212251  123370 pv_controller_base.go:491] storeObjectUpdate: adding claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2", version 36302
I0211 20:53:31.212282  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:53:31.212307  123370 pv_controller.go:352] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2]: volume "pv-i-canbind-2" found: phase: Available, bound to: "", boundByController: false
I0211 20:53:31.212347  123370 pv_controller.go:947] binding volume "pv-i-canbind-2" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2"
I0211 20:53:31.212361  123370 pv_controller.go:846] updating PersistentVolume[pv-i-canbind-2]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2"
I0211 20:53:31.212387  123370 pv_controller.go:865] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2" bound to volume "pv-i-canbind-2"
I0211 20:53:31.212499  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.780361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44594]
I0211 20:53:31.214249  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (1.763173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44590]
I0211 20:53:31.214337  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-canbind-2: (1.50607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44594]
I0211 20:53:31.214557  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-canbind-2" with version 36305
I0211 20:53:31.214600  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 (uid: 16d0b628-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.214603  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-canbind-2" with version 36305
I0211 20:53:31.214630  123370 pv_controller.go:878] updating PersistentVolume[pv-i-canbind-2]: bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2"
I0211 20:53:31.214642  123370 pv_controller.go:794] updating PersistentVolume[pv-i-canbind-2]: set phase Bound
I0211 20:53:31.214611  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2
I0211 20:53:31.214765  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:53:31.214793  123370 pv_controller.go:620] synchronizing PersistentVolume[pv-i-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I0211 20:53:31.214542  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound
I0211 20:53:31.214823  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound
I0211 20:53:31.214919  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.214959  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.215055  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.215080  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.215095  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.215107  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.215151  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.215172  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.215223  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.215244  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.215259  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.215269  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 is not bound, assuming PVC matches predicate when counting limits
E0211 20:53:31.215328  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:53:31.215368  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound to (PodScheduled==False, Reason=Unschedulable)
I0211 20:53:31.216534  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (1.639241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44590]
I0211 20:53:31.217489  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-canbind-2" with version 36306
I0211 20:53:31.217529  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 (uid: 16d0b628-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.217487  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-canbind-2" with version 36306
I0211 20:53:31.217541  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2
I0211 20:53:31.217552  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:53:31.217562  123370 pv_controller.go:620] synchronizing PersistentVolume[pv-i-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I0211 20:53:31.217562  123370 pv_controller.go:815] volume "pv-i-canbind-2" entered phase "Bound"
I0211 20:53:31.217577  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2]: binding to "pv-i-canbind-2"
I0211 20:53:31.217592  123370 pv_controller.go:917] volume "pv-i-canbind-2" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2"
I0211 20:53:31.217720  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.637901ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:31.217728  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-mix-bound: (1.503253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:31.217788  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-mix-bound/status: (2.010443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44598]
E0211 20:53:31.218074  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:53:31.218170  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound
I0211 20:53:31.218186  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound
I0211 20:53:31.218289  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.218304  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.218317  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.218335  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.218382  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.218393  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.218426  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.218435  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.218467  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.218480  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.218492  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.218438  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 is not bound, assuming PVC matches predicate when counting limits
E0211 20:53:31.218605  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:53:31.218658  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound to (PodScheduled==False, Reason=Unschedulable)
I0211 20:53:31.219496  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-i-canbind-2: (1.702996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44590]
I0211 20:53:31.219726  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2" with version 36309
I0211 20:53:31.219766  123370 pv_controller.go:928] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2]: bound to "pv-i-canbind-2"
I0211 20:53:31.219779  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2] status: set phase Bound
I0211 20:53:31.220072  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-mix-bound: (1.168597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:31.220555  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-mix-bound/status: (1.627451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
E0211 20:53:31.220784  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:53:31.220868  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound
I0211 20:53:31.220885  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound
I0211 20:53:31.220976  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.221059  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.221064  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.221210  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.221228  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.221692  123370 scheduler_binder.go:665] All bound volumes for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound" match with Node "node-1"
I0211 20:53:31.221754  123370 scheduler_binder.go:710] Found matching volumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound" on node "node-1"
I0211 20:53:31.221230  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:31.221823  123370 wrap.go:47] PATCH /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events/pod-mix-bound.15826a954b544c1a: (2.420238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44604]
I0211 20:53:31.221851  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-i-canbind-2/status: (1.873344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44590]
I0211 20:53:31.221829  123370 scheduler_binder.go:659] PersistentVolume "pv-i-canbind-2", Node "node-2" mismatch for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound": No matching NodeSelectorTerms
I0211 20:53:31.221936  123370 scheduler_binder.go:697] No matching volumes for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound", PVC "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4" on node "node-2"
I0211 20:53:31.221980  123370 scheduler_binder.go:736] storage class "wait-ssnz" of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4" does not support dynamic provisioning
I0211 20:53:31.222033  123370 scheduler_binder.go:269] AssumePodVolumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound", node "node-1"
I0211 20:53:31.222075  123370 scheduler_assume_cache.go:319] Assumed v1.PersistentVolume "pv-w-canbind-4", version 36299
I0211 20:53:31.222134  123370 scheduler_binder.go:344] BindPodVolumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound", node "node-1"
I0211 20:53:31.222167  123370 scheduler_binder.go:412] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4" bound to volume "pv-w-canbind-4"
I0211 20:53:31.222397  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2" with version 36310
I0211 20:53:31.222459  123370 pv_controller.go:759] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2" entered phase "Bound"
I0211 20:53:31.222483  123370 pv_controller.go:973] volume "pv-i-canbind-2" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2"
I0211 20:53:31.222508  123370 pv_controller.go:974] volume "pv-i-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 (uid: 16d0b628-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.222520  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2" status after binding: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I0211 20:53:31.222557  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2" with version 36310
I0211 20:53:31.222580  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2]: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I0211 20:53:31.222604  123370 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2]: volume "pv-i-canbind-2" found: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 (uid: 16d0b628-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.222623  123370 pv_controller.go:483] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2]: claim is already correctly bound
I0211 20:53:31.222632  123370 pv_controller.go:947] binding volume "pv-i-canbind-2" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2"
I0211 20:53:31.222642  123370 pv_controller.go:846] updating PersistentVolume[pv-i-canbind-2]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2"
I0211 20:53:31.222660  123370 pv_controller.go:858] updating PersistentVolume[pv-i-canbind-2]: already bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2"
I0211 20:53:31.222670  123370 pv_controller.go:794] updating PersistentVolume[pv-i-canbind-2]: set phase Bound
I0211 20:53:31.222688  123370 pv_controller.go:797] updating PersistentVolume[pv-i-canbind-2]: phase Bound already set
I0211 20:53:31.222706  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2]: binding to "pv-i-canbind-2"
I0211 20:53:31.222740  123370 pv_controller.go:932] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2]: already bound to "pv-i-canbind-2"
I0211 20:53:31.222755  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2] status: set phase Bound
I0211 20:53:31.222772  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2] status: phase Bound already set
I0211 20:53:31.222788  123370 pv_controller.go:973] volume "pv-i-canbind-2" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2"
I0211 20:53:31.222812  123370 pv_controller.go:974] volume "pv-i-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 (uid: 16d0b628-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.222829  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2" status after binding: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I0211 20:53:31.224352  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-canbind-4: (1.94585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:31.224476  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36312
I0211 20:53:31.224519  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 (uid: 16d06ac0-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.224529  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4
I0211 20:53:31.224551  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:53:31.224566  123370 pv_controller.go:620] synchronizing PersistentVolume[pv-w-canbind-4]: volume not bound yet, waiting for syncClaim to fix it
I0211 20:53:31.224597  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4" with version 36300
I0211 20:53:31.224621  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:53:31.224654  123370 pv_controller.go:352] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4]: volume "pv-w-canbind-4" found: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 (uid: 16d06ac0-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.224697  123370 pv_controller.go:947] binding volume "pv-w-canbind-4" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4"
I0211 20:53:31.224711  123370 scheduler_binder.go:417] updating PersistentVolume[pv-w-canbind-4]: bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4"
I0211 20:53:31.224721  123370 pv_controller.go:846] updating PersistentVolume[pv-w-canbind-4]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4"
I0211 20:53:31.224772  123370 pv_controller.go:858] updating PersistentVolume[pv-w-canbind-4]: already bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4"
I0211 20:53:31.224789  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind-4]: set phase Bound
I0211 20:53:31.226801  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (1.805117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:31.227000  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36313
I0211 20:53:31.227045  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 (uid: 16d06ac0-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.227057  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4
I0211 20:53:31.227070  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:53:31.227094  123370 pv_controller.go:620] synchronizing PersistentVolume[pv-w-canbind-4]: volume not bound yet, waiting for syncClaim to fix it
I0211 20:53:31.227198  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36313
I0211 20:53:31.227233  123370 pv_controller.go:815] volume "pv-w-canbind-4" entered phase "Bound"
I0211 20:53:31.227242  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4]: binding to "pv-w-canbind-4"
I0211 20:53:31.227260  123370 pv_controller.go:917] volume "pv-w-canbind-4" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4"
I0211 20:53:31.229208  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-canbind-4: (1.692317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:31.229546  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4" with version 36314
I0211 20:53:31.229586  123370 pv_controller.go:928] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4]: bound to "pv-w-canbind-4"
I0211 20:53:31.229596  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4] status: set phase Bound
I0211 20:53:31.231390  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-canbind-4/status: (1.591875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:31.231648  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4" with version 36315
I0211 20:53:31.231682  123370 pv_controller.go:759] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4" entered phase "Bound"
I0211 20:53:31.231695  123370 pv_controller.go:973] volume "pv-w-canbind-4" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4"
I0211 20:53:31.231713  123370 pv_controller.go:974] volume "pv-w-canbind-4" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 (uid: 16d06ac0-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.231741  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4" status after binding: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
I0211 20:53:31.231800  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4" with version 36315
I0211 20:53:31.231817  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4]: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
I0211 20:53:31.231827  123370 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4]: volume "pv-w-canbind-4" found: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 (uid: 16d06ac0-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.231839  123370 pv_controller.go:483] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4]: claim is already correctly bound
I0211 20:53:31.231846  123370 pv_controller.go:947] binding volume "pv-w-canbind-4" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4"
I0211 20:53:31.231853  123370 pv_controller.go:846] updating PersistentVolume[pv-w-canbind-4]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4"
I0211 20:53:31.231865  123370 pv_controller.go:858] updating PersistentVolume[pv-w-canbind-4]: already bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4"
I0211 20:53:31.231877  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind-4]: set phase Bound
I0211 20:53:31.231881  123370 pv_controller.go:797] updating PersistentVolume[pv-w-canbind-4]: phase Bound already set
I0211 20:53:31.231886  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4]: binding to "pv-w-canbind-4"
I0211 20:53:31.231898  123370 pv_controller.go:932] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4]: already bound to "pv-w-canbind-4"
I0211 20:53:31.231905  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4] status: set phase Bound
I0211 20:53:31.231915  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4] status: phase Bound already set
I0211 20:53:31.231927  123370 pv_controller.go:973] volume "pv-w-canbind-4" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4"
I0211 20:53:31.231939  123370 pv_controller.go:974] volume "pv-w-canbind-4" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 (uid: 16d06ac0-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:31.231962  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4" status after binding: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
I0211 20:53:31.316864  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-mix-bound: (1.897155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:31.416847  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-mix-bound: (1.846217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:31.495490  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:31.495512  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:31.495501  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:31.495501  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:31.497464  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:31.497933  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:31.517107  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-mix-bound: (2.039185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:31.616925  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-mix-bound: (1.896303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:31.716778  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-mix-bound: (1.781165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:31.817038  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-mix-bound: (1.943748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:31.862310  123370 cache.go:530] Couldn't expire cache for pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound. Binding is still in progress.
I0211 20:53:31.916994  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-mix-bound: (2.049399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.017138  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-mix-bound: (2.12749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.116845  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-mix-bound: (1.837803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.217062  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-mix-bound: (2.041208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.224985  123370 scheduler_binder.go:559] All PVCs for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound" are bound
I0211 20:53:32.225036  123370 factory.go:733] Attempting to bind pod-mix-bound to node-1
I0211 20:53:32.227126  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-mix-bound/binding: (1.877826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.227447  123370 scheduler.go:571] pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-mix-bound is bound successfully on node node-1, 2 nodes evaluated, 1 nodes were found feasible
I0211 20:53:32.229331  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.555567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.317118  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-mix-bound: (2.098681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.318956  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-canbind-4: (1.281194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.320518  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-i-canbind-2: (1.133934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.322098  123370 wrap.go:47] GET /api/v1/persistentvolumes/pv-w-canbind-4: (1.134919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.323683  123370 wrap.go:47] GET /api/v1/persistentvolumes/pv-i-canbind-2: (1.09384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.328850  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (4.764774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.332875  123370 pv_controller_base.go:251] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2" deleted
I0211 20:53:32.332933  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-canbind-2" with version 36306
I0211 20:53:32.332984  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 (uid: 16d0b628-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:32.333005  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2
I0211 20:53:32.334006  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-i-canbind-2: (813.929µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:32.334234  123370 pv_controller.go:564] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2 not found
I0211 20:53:32.334263  123370 pv_controller.go:592] volume "pv-i-canbind-2" is released and reclaim policy "Retain" will be executed
I0211 20:53:32.334277  123370 pv_controller.go:794] updating PersistentVolume[pv-i-canbind-2]: set phase Released
I0211 20:53:32.335020  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (5.86529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.335306  123370 pv_controller_base.go:251] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4" deleted
I0211 20:53:32.336439  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (1.833668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:32.336715  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-canbind-2" with version 36331
I0211 20:53:32.336793  123370 pv_controller.go:815] volume "pv-i-canbind-2" entered phase "Released"
I0211 20:53:32.336819  123370 pv_controller.go:1027] reclaimVolume[pv-i-canbind-2]: policy is Retain, nothing to do
I0211 20:53:32.336847  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36313
I0211 20:53:32.336877  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 (uid: 16d06ac0-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:32.336885  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4
I0211 20:53:32.338118  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-w-canbind-4: (1.034708ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:32.338321  123370 pv_controller.go:564] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 not found
I0211 20:53:32.338347  123370 pv_controller.go:592] volume "pv-w-canbind-4" is released and reclaim policy "Retain" will be executed
I0211 20:53:32.338358  123370 pv_controller.go:794] updating PersistentVolume[pv-w-canbind-4]: set phase Released
I0211 20:53:32.340206  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (1.60404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:32.340634  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36333
I0211 20:53:32.340663  123370 pv_controller.go:815] volume "pv-w-canbind-4" entered phase "Released"
I0211 20:53:32.340671  123370 pv_controller.go:1027] reclaimVolume[pv-w-canbind-4]: policy is Retain, nothing to do
I0211 20:53:32.340689  123370 pv_controller_base.go:211] volume "pv-i-canbind-2" deleted
I0211 20:53:32.340711  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36333
I0211 20:53:32.340731  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Released, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 (uid: 16d06ac0-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:32.340737  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4
I0211 20:53:32.340761  123370 pv_controller.go:564] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4 not found
I0211 20:53:32.340765  123370 pv_controller.go:1027] reclaimVolume[pv-w-canbind-4]: policy is Retain, nothing to do
I0211 20:53:32.340779  123370 pv_controller_base.go:385] deletion of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind-2" was already processed
I0211 20:53:32.341248  123370 wrap.go:47] DELETE /api/v1/persistentvolumes: (5.882511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.341471  123370 pv_controller_base.go:211] volume "pv-w-canbind-4" deleted
I0211 20:53:32.341513  123370 pv_controller_base.go:385] deletion of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-w-canbind-4" was already processed
I0211 20:53:32.347004  123370 wrap.go:47] DELETE /apis/storage.k8s.io/v1/storageclasses: (5.313849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.347193  123370 volume_binding_test.go:193] Running test immediate can bind
I0211 20:53:32.348655  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.240857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.350661  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.644244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.352293  123370 wrap.go:47] POST /api/v1/persistentvolumes: (1.255584ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.352607  123370 pv_controller_base.go:491] storeObjectUpdate: adding volume "pv-i-canbind", version 36339
I0211 20:53:32.352651  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-canbind]: phase: Pending, bound to: "", boundByController: false
I0211 20:53:32.352663  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-i-canbind]: volume is unused
I0211 20:53:32.352668  123370 pv_controller.go:794] updating PersistentVolume[pv-i-canbind]: set phase Available
I0211 20:53:32.354340  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (1.49629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.354623  123370 pv_controller_base.go:491] storeObjectUpdate: adding claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind", version 36340
I0211 20:53:32.354659  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:53:32.354685  123370 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind]: no volume found
I0211 20:53:32.354722  123370 pv_controller.go:1336] provisionClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind]: started
I0211 20:53:32.354739  123370 pv_controller.go:1587] scheduleOperation[provision-volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind[177f111a-2e3f-11e9-8784-0242ac110002]]
I0211 20:53:32.354801  123370 pv_controller.go:1352] provisionClaimOperation [volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind] started, class: "immediate-x9bd"
I0211 20:53:32.354855  123370 pv_controller.go:1357] error finding provisioning plugin for claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind: no volume plugin matched
E0211 20:53:32.354917  123370 goroutinemap.go:150] Operation for "provision-volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind[177f111a-2e3f-11e9-8784-0242ac110002]" failed. No retries permitted until 2019-02-11 20:53:32.854887565 +0000 UTC m=+246.782823200 (durationBeforeRetry 500ms). Error: "no volume plugin matched"
I0211 20:53:32.354906  123370 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002", Name:"pvc-i-canbind", UID:"177f111a-2e3f-11e9-8784-0242ac110002", APIVersion:"v1", ResourceVersion:"36340", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' no volume plugin matched
I0211 20:53:32.355153  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (2.221474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:32.355388  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-canbind" with version 36341
I0211 20:53:32.355441  123370 pv_controller.go:815] volume "pv-i-canbind" entered phase "Available"
I0211 20:53:32.355486  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-canbind" with version 36341
I0211 20:53:32.355503  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-canbind]: phase: Available, bound to: "", boundByController: false
I0211 20:53:32.355510  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-i-canbind]: volume is unused
I0211 20:53:32.355515  123370 pv_controller.go:794] updating PersistentVolume[pv-i-canbind]: set phase Available
I0211 20:53:32.355519  123370 pv_controller.go:797] updating PersistentVolume[pv-i-canbind]: phase Available already set
I0211 20:53:32.356637  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind
I0211 20:53:32.356664  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind
I0211 20:53:32.356640  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (1.799836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
I0211 20:53:32.356828  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.423747ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44620]
I0211 20:53:32.356847  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:32.356836  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:32.356973  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:32.356999  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:32.357036  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:32.357070  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind is not bound, assuming PVC matches predicate when counting limits
E0211 20:53:32.357134  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:53:32.357182  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind to (PodScheduled==False, Reason=Unschedulable)
I0211 20:53:32.359217  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.433971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:32.359308  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.836832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:32.359481  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind/status: (2.010978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44602]
E0211 20:53:32.359721  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:53:32.359798  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind
I0211 20:53:32.359805  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind
I0211 20:53:32.359870  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:32.359886  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:32.359975  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:32.359986  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:32.360001  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:32.360004  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind is not bound, assuming PVC matches predicate when counting limits
E0211 20:53:32.360063  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:53:32.360137  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind to (PodScheduled==False, Reason=Unschedulable)
I0211 20:53:32.361579  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.204031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:32.362254  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind/status: (1.847214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
E0211 20:53:32.362538  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:53:32.363207  123370 wrap.go:47] PATCH /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events/pod-i-canbind.15826a958f62f157: (2.311724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44624]
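
Annotation: the two scheduling attempts above fail because pvc-i-canbind is still Pending. With an immediate-binding claim the scheduler refuses to place the pod until the claim is bound, which is exactly what the repeated "pod has unbound immediate PersistentVolumeClaims" errors and the PodScheduled==False/Unschedulable condition show. The long run of GET requests for pod-i-canbind that follows, one roughly every 100ms from the scheduler.test client, appears to be the test harness polling the pod until it schedules. A minimal Go sketch of such a poll is below; the helper name, interval, and timeout are illustrative assumptions, not the test's own code, and the client-go call signatures are the context-free ones of this era.

package volumescheduling

import (
    "time"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// waitForPodScheduled is a hypothetical helper (not the integration test's
// actual code) that polls a pod every 100ms until its PodScheduled condition
// is True, mirroring the GET cadence visible in this log.
func waitForPodScheduled(cs kubernetes.Interface, namespace, name string) error {
    return wait.Poll(100*time.Millisecond, time.Minute, func() (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == v1.PodScheduled && cond.Status == v1.ConditionTrue {
                return true, nil
            }
        }
        return false, nil
    })
}
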
I0211 20:53:32.459382  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.734327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:32.495683  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:32.495701  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:32.495704  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:32.495725  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:32.497684  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:32.498104  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:32.559600  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.882999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:32.659574  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.900654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:32.760000  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.245446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:32.859555  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.888103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:32.959627  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.971288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:33.059983  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.294859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:33.159099  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.508274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:33.259590  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.889089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:33.359536  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.832183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:33.459552  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.859548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:33.495913  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:33.495956  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:33.495911  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:33.495933  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:33.497883  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:33.498382  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:33.559335  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.747273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:33.659670  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.951501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:33.759900  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.263882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:33.859627  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.981447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:33.959540  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.864338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:34.059607  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.978526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:34.159609  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.94055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:34.259558  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.902146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:34.359382  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.707335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:34.459558  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.909041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:34.496205  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:34.496257  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:34.496257  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:34.496260  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:34.498049  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:34.498565  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:34.559730  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.981366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:34.659751  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.078001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:34.759543  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.826019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:34.859824  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.16381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:34.959731  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.048122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:35.059643  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.982067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:35.159583  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.927356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:35.259927  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.23379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:35.359640  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.941108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:35.459837  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.13571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:35.496488  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:35.496487  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:35.496495  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:35.496529  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:35.498187  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:35.498806  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:35.560336  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.606729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:35.659643  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.010454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:35.759617  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.885329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:35.859920  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.229891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:35.959532  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.841443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:36.059666  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.057476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:36.159752  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.039037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:36.259684  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.938464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:36.359592  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.881657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:36.459721  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.012674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:36.496670  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:36.496673  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:36.496670  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:36.496670  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:36.498365  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:36.499011  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:36.559611  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.90927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:36.615995  123370 wrap.go:47] GET /api/v1/namespaces/default: (1.818252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:36.617774  123370 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.297082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:36.619465  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.220295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:36.659722  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.018777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:36.759955  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.231412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:36.859959  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.213239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:36.959847  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.174832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:37.059727  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.070063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:37.159631  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.865517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:37.259724  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.000056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:37.359352  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.674041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:37.459737  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.044797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:37.496913  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:37.496913  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:37.496913  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:37.496922  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:37.498505  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:37.499165  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:37.559624  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.903659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:37.659466  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.801436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:37.759809  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.13682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:37.859369  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.723483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:37.959698  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.048502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:38.059761  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.038722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:38.159445  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.688013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:38.249125  123370 wrap.go:47] GET /api/v1/namespaces/default: (1.504136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:53:38.250963  123370 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.299211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:53:38.252654  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.164151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58168]
I0211 20:53:38.259218  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.625052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:38.359654  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.974294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:38.459917  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.178601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:38.497313  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:38.497350  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:38.497366  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:38.497330  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:38.498766  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:38.499377  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:38.559737  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.062187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:38.659681  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.99014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:38.759779  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.067453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:38.859312  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.686941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:38.959717  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.001009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:39.059690  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.929309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:39.159789  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.049263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:39.259779  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.102672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:39.359472  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.87574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:39.459715  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.047069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:39.497537  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:39.497584  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:39.497537  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:39.497537  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:39.498927  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:39.499654  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:39.559672  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.979005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:39.659456  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.79882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:39.759902  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.211755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:39.859811  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.11334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:39.959589  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.827474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:40.059579  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.917019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:40.159623  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.909903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:40.260309  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.52514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:40.359879  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.034865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:40.459795  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.079657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:40.497818  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:40.497854  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:40.497862  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:40.497826  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:40.499115  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:40.499904  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:40.559511  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.834554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:40.659582  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.951369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:40.759923  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.236776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:40.859866  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.019333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:40.959658  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.957641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:41.059590  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.911371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:41.159754  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.033472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:41.259581  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.904461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:41.359600  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.889243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:41.459733  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.031632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:41.498027  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:41.498052  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:41.498027  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:41.498027  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:41.499301  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:41.500084  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:41.559705  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.985899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:41.659940  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.212491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:41.759842  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.141916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:41.859442  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.774608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:41.959753  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.090955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:42.059643  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.983882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:42.159865  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.163509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:42.259591  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.88265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:42.359860  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.172263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:42.459703  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.012289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:42.498241  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:42.498285  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:42.498241  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:42.498332  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:42.499514  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:42.500234  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:42.559862  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.109278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:42.659716  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.996968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:42.759538  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.879899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:42.859556  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.906115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:42.960006  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.155044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:43.059862  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.133768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:43.159540  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.896449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:43.260037  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.32891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:43.359499  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.863426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:43.459661  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.00254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:43.498515  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:43.498515  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:43.498515  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:43.498606  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:43.499687  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:43.500564  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:43.559545  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.849237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:43.659403  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.688041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:43.760096  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.345768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:43.859938  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.159954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:43.959624  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.974475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:44.059581  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.950696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:44.159472  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.770105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:44.259708  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.956539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:44.359830  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.127791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:44.459454  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.784981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:44.498760  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:44.498760  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:44.498782  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:44.498851  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:44.499870  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:44.500752  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:44.559811  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.088008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:44.659712  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.015184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:44.759448  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.799837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:44.859570  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.937695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:44.959859  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.130356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:45.059806  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.155963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:45.159471  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.81241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:45.264676  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.883624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:45.359902  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.260259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:45.459497  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.827728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:45.499000  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:45.499039  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:45.499068  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:45.499114  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:45.500084  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:45.500937  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:45.559496  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.809683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:45.659775  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.057432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:45.759565  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.750542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:45.859780  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.141012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:45.959922  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.210487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:46.059748  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.026174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
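
Annotation: the bind finally happens in the block below. A periodic resync of the PV controller at 20:53:46.065 re-processes the still-Pending claim, finds the Available pv-i-canbind (which had only entered the Available phase at 20:53:32, apparently after the claim's first sync), and binds the two; hence the roughly 14 second gap of polling above. For orientation only, a pair of objects like the ones involved could be constructed as in the sketch below; the HostPath source, capacity, and field shapes are generic assumptions matching the core/v1 API of this era (Resources is a v1.ResourceRequirements), not the test's actual fixtures.

package volumescheduling

import (
    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// examplePVAndPVC builds a generic Available PV and an immediate-binding PVC
// that the PV controller could match and bind, as it does in the log below.
// These are illustrative objects, not the ones TestVolumeBinding creates.
func examplePVAndPVC(namespace string) (*v1.PersistentVolume, *v1.PersistentVolumeClaim) {
    pv := &v1.PersistentVolume{
        ObjectMeta: metav1.ObjectMeta{Name: "pv-i-canbind"},
        Spec: v1.PersistentVolumeSpec{
            Capacity:    v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
            AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
            PersistentVolumeSource: v1.PersistentVolumeSource{
                HostPath: &v1.HostPathVolumeSource{Path: "/tmp/pv-i-canbind"},
            },
        },
    }
    pvc := &v1.PersistentVolumeClaim{
        ObjectMeta: metav1.ObjectMeta{Name: "pvc-i-canbind", Namespace: namespace},
        Spec: v1.PersistentVolumeClaimSpec{
            AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
            Resources: v1.ResourceRequirements{
                Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
            },
        },
    }
    return pv, pvc
}
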
I0211 20:53:46.065258  123370 pv_controller_base.go:408] resyncing PV controller
I0211 20:53:46.065353  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-canbind" with version 36341
I0211 20:53:46.065389  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-canbind]: phase: Available, bound to: "", boundByController: false
I0211 20:53:46.065399  123370 pv_controller.go:511] synchronizing PersistentVolume[pv-i-canbind]: volume is unused
I0211 20:53:46.065437  123370 pv_controller.go:794] updating PersistentVolume[pv-i-canbind]: set phase Available
I0211 20:53:46.065460  123370 pv_controller.go:797] updating PersistentVolume[pv-i-canbind]: phase Available already set
I0211 20:53:46.065458  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind" with version 36340
I0211 20:53:46.065481  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:53:46.065519  123370 pv_controller.go:352] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind]: volume "pv-i-canbind" found: phase: Available, bound to: "", boundByController: false
I0211 20:53:46.065532  123370 pv_controller.go:947] binding volume "pv-i-canbind" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind"
I0211 20:53:46.065539  123370 pv_controller.go:846] updating PersistentVolume[pv-i-canbind]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind"
I0211 20:53:46.065567  123370 pv_controller.go:865] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind" bound to volume "pv-i-canbind"
I0211 20:53:46.068082  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-canbind: (2.231407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:46.068140  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-canbind" with version 36433
I0211 20:53:46.068170  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-canbind]: phase: Available, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind (uid: 177f111a-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:46.068196  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind
I0211 20:53:46.068215  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:53:46.068253  123370 pv_controller.go:620] synchronizing PersistentVolume[pv-i-canbind]: volume not bound yet, waiting for syncClaim to fix it
I0211 20:53:46.068144  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind
I0211 20:53:46.068285  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind
I0211 20:53:46.068358  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-canbind" with version 36433
I0211 20:53:46.068394  123370 pv_controller.go:878] updating PersistentVolume[pv-i-canbind]: bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind"
I0211 20:53:46.068431  123370 pv_controller.go:794] updating PersistentVolume[pv-i-canbind]: set phase Bound
I0211 20:53:46.068517  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:46.068626  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:46.068652  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:46.068517  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:46.068822  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:46.068846  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind is not bound, assuming PVC matches predicate when counting limits
E0211 20:53:46.068911  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:53:46.068966  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind to (PodScheduled==False, Reason=Unschedulable)
I0211 20:53:46.070768  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.568326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:46.071315  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (2.3112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:46.071492  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-canbind" with version 36434
I0211 20:53:46.071529  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-canbind]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind (uid: 177f111a-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:46.071546  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind
I0211 20:53:46.071559  123370 pv_controller.go:572] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:53:46.071572  123370 pv_controller.go:620] synchronizing PersistentVolume[pv-i-canbind]: volume not bound yet, waiting for syncClaim to fix it
I0211 20:53:46.071580  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-canbind" with version 36434
I0211 20:53:46.071608  123370 pv_controller.go:815] volume "pv-i-canbind" entered phase "Bound"
I0211 20:53:46.071622  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind]: binding to "pv-i-canbind"
I0211 20:53:46.071651  123370 pv_controller.go:917] volume "pv-i-canbind" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind"
I0211 20:53:46.071693  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind/status: (2.182359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44792]
E0211 20:53:46.072042  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:53:46.072381  123370 wrap.go:47] PATCH /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events/pod-i-canbind.15826a958f62f157: (2.593347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:46.073648  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-i-canbind: (1.647499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44622]
I0211 20:53:46.074070  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind" with version 36437
I0211 20:53:46.074103  123370 pv_controller.go:928] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind]: bound to "pv-i-canbind"
I0211 20:53:46.074113  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind] status: set phase Bound
I0211 20:53:46.075809  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-i-canbind/status: (1.471914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:46.076058  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind" with version 36438
I0211 20:53:46.076088  123370 pv_controller.go:759] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind" entered phase "Bound"
I0211 20:53:46.076102  123370 pv_controller.go:973] volume "pv-i-canbind" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind"
I0211 20:53:46.076124  123370 pv_controller.go:974] volume "pv-i-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind (uid: 177f111a-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:46.076142  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind" status after binding: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I0211 20:53:46.076177  123370 pv_controller_base.go:519] storeObjectUpdate updating claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind" with version 36438
I0211 20:53:46.076208  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind]: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I0211 20:53:46.076226  123370 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind]: volume "pv-i-canbind" found: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind (uid: 177f111a-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:46.076236  123370 pv_controller.go:483] synchronizing bound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind]: claim is already correctly bound
I0211 20:53:46.076256  123370 pv_controller.go:947] binding volume "pv-i-canbind" to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind"
I0211 20:53:46.076273  123370 pv_controller.go:846] updating PersistentVolume[pv-i-canbind]: binding to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind"
I0211 20:53:46.076294  123370 pv_controller.go:858] updating PersistentVolume[pv-i-canbind]: already bound to "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind"
I0211 20:53:46.076305  123370 pv_controller.go:794] updating PersistentVolume[pv-i-canbind]: set phase Bound
I0211 20:53:46.076312  123370 pv_controller.go:797] updating PersistentVolume[pv-i-canbind]: phase Bound already set
I0211 20:53:46.076322  123370 pv_controller.go:885] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind]: binding to "pv-i-canbind"
I0211 20:53:46.076338  123370 pv_controller.go:932] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind]: already bound to "pv-i-canbind"
I0211 20:53:46.076353  123370 pv_controller.go:700] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind] status: set phase Bound
I0211 20:53:46.076369  123370 pv_controller.go:745] updating PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind] status: phase Bound already set
I0211 20:53:46.076384  123370 pv_controller.go:973] volume "pv-i-canbind" bound to claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind"
I0211 20:53:46.076401  123370 pv_controller.go:974] volume "pv-i-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind (uid: 177f111a-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:46.076450  123370 pv_controller.go:975] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind" status after binding: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I0211 20:53:46.159658  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.992768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:46.259621  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.796399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:46.359552  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.899417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:46.459803  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.149854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:46.499222  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:46.499223  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:46.499298  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:46.499319  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:46.500275  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:46.501123  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:46.559515  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.852527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:46.622120  123370 wrap.go:47] GET /api/v1/namespaces/default: (1.806724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:46.623942  123370 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.281074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:46.625617  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.21647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:46.659893  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.200872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:46.759896  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.169624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:46.859550  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.907535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:46.959984  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.223999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.060064  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.167982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.159374  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.641072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.259571  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.826346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.359252  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (1.525974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.460446  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.746385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.499465  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:47.499509  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:47.499517  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:47.499544  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:47.500457  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:47.501350  123370 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 20:53:47.559789  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.102571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.659854  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.058836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.759819  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.105794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.859781  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.077043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.865520  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind
I0211 20:53:47.865554  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind
I0211 20:53:47.865839  123370 scheduler_binder.go:659] PersistentVolume "pv-i-canbind", Node "node-2" mismatch for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind": No matching NodeSelectorTerms
I0211 20:53:47.865903  123370 scheduler_binder.go:665] All bound volumes for Pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind" match with Node "node-1"
I0211 20:53:47.865973  123370 scheduler_binder.go:269] AssumePodVolumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind", node "node-1"
I0211 20:53:47.865998  123370 scheduler_binder.go:279] AssumePodVolumes for pod "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind", node "node-1": all PVCs bound and nothing to do
I0211 20:53:47.866095  123370 factory.go:733] Attempting to bind pod-i-canbind to node-1
I0211 20:53:47.868530  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind/binding: (2.029895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.868792  123370 scheduler.go:571] pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-canbind is bound successfully on node node-1, 2 nodes evaluated, 1 nodes were found feasible
I0211 20:53:47.870828  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.725446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.959942  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-canbind: (2.174534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.961920  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-i-canbind: (1.446733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.963718  123370 wrap.go:47] GET /api/v1/persistentvolumes/pv-i-canbind: (1.310131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.969558  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (5.367802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.973701  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (3.719165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.973999  123370 pv_controller_base.go:251] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind" deleted
I0211 20:53:47.974046  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-canbind" with version 36434
I0211 20:53:47.974081  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-canbind]: phase: Bound, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind (uid: 177f111a-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:47.974101  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind
I0211 20:53:47.975385  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-i-canbind: (1.091747ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:47.975637  123370 pv_controller.go:564] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind not found
I0211 20:53:47.975665  123370 pv_controller.go:592] volume "pv-i-canbind" is released and reclaim policy "Retain" will be executed
I0211 20:53:47.975679  123370 pv_controller.go:794] updating PersistentVolume[pv-i-canbind]: set phase Released
I0211 20:53:47.977809  123370 store.go:239] deletion of /cefbd804-3d51-4834-a729-5f6c5123655d/persistentvolumes/pv-i-canbind failed because of a conflict, going to retry
I0211 20:53:47.977939  123370 wrap.go:47] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (1.928489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:47.978176  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-canbind" with version 36467
I0211 20:53:47.978210  123370 pv_controller.go:815] volume "pv-i-canbind" entered phase "Released"
I0211 20:53:47.978223  123370 pv_controller.go:1027] reclaimVolume[pv-i-canbind]: policy is Retain, nothing to do
I0211 20:53:47.978250  123370 pv_controller_base.go:519] storeObjectUpdate updating volume "pv-i-canbind" with version 36467
I0211 20:53:47.978268  123370 pv_controller.go:506] synchronizing PersistentVolume[pv-i-canbind]: phase: Released, bound to: "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind (uid: 177f111a-2e3f-11e9-8784-0242ac110002)", boundByController: true
I0211 20:53:47.978276  123370 pv_controller.go:531] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind
I0211 20:53:47.978287  123370 pv_controller.go:564] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind not found
I0211 20:53:47.978295  123370 pv_controller.go:1027] reclaimVolume[pv-i-canbind]: policy is Retain, nothing to do
I0211 20:53:47.978968  123370 wrap.go:47] DELETE /api/v1/persistentvolumes: (4.788129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.979064  123370 pv_controller_base.go:211] volume "pv-i-canbind" deleted
I0211 20:53:47.979108  123370 pv_controller_base.go:385] deletion of claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-canbind" was already processed
I0211 20:53:47.985112  123370 wrap.go:47] DELETE /apis/storage.k8s.io/v1/storageclasses: (5.782829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.985284  123370 volume_binding_test.go:193] Running test immediate cannot bind
I0211 20:53:47.986704  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.216889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.988578  123370 wrap.go:47] POST /apis/storage.k8s.io/v1/storageclasses: (1.405925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.990571  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (1.550199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.990624  123370 pv_controller_base.go:491] storeObjectUpdate: adding claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind", version 36473
I0211 20:53:47.990661  123370 pv_controller.go:241] synchronizing PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0211 20:53:47.990687  123370 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind]: no volume found
I0211 20:53:47.990705  123370 pv_controller.go:1336] provisionClaim[volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind]: started
I0211 20:53:47.990715  123370 pv_controller.go:1587] scheduleOperation[provision-volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind[20d0f3ab-2e3f-11e9-8784-0242ac110002]]
I0211 20:53:47.990771  123370 pv_controller.go:1352] provisionClaimOperation [volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind] started, class: "immediate-jnlp"
I0211 20:53:47.990830  123370 pv_controller.go:1357] error finding provisioning plugin for claim volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind: no volume plugin matched
I0211 20:53:47.990896  123370 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002", Name:"pvc-i-cannotbind", UID:"20d0f3ab-2e3f-11e9-8784-0242ac110002", APIVersion:"v1", ResourceVersion:"36473", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' no volume plugin matched
E0211 20:53:47.990876  123370 goroutinemap.go:150] Operation for "provision-volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind[20d0f3ab-2e3f-11e9-8784-0242ac110002]" failed. No retries permitted until 2019-02-11 20:53:48.490852266 +0000 UTC m=+262.418787889 (durationBeforeRetry 500ms). Error: "no volume plugin matched"
I0211 20:53:47.992852  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (1.768615ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
I0211 20:53:47.992861  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.648595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:47.993054  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-cannotbind
I0211 20:53:47.993075  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-cannotbind
I0211 20:53:47.993164  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:47.993194  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:47.993280  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:47.993287  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:47.993294  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:47.993306  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind is not bound, assuming PVC matches predicate when counting limits
E0211 20:53:47.993367  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-cannotbind: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:53:47.993449  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I0211 20:53:47.994868  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-cannotbind: (1.239044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:47.995613  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.581647ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44824]
I0211 20:53:47.996065  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-cannotbind/status: (2.36389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44794]
E0211 20:53:47.996448  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:53:47.996596  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-cannotbind
I0211 20:53:47.996632  123370 scheduler.go:453] Attempting to schedule pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-cannotbind
I0211 20:53:47.996731  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:47.996787  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:47.996815  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:47.996834  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:47.996912  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind is not bound, assuming PVC matches predicate when counting limits
I0211 20:53:47.996968  123370 predicates.go:439] PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind is not bound, assuming PVC matches predicate when counting limits
E0211 20:53:47.997035  123370 factory.go:660] Error scheduling volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-cannotbind: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0211 20:53:47.997084  123370 factory.go:742] Updating pod condition for volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I0211 20:53:47.999229  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-cannotbind: (1.880565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:48.007790  123370 wrap.go:47] PUT /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-cannotbind/status: (10.408894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44824]
E0211 20:53:48.008109  123370 scheduler.go:480] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0211 20:53:48.011004  123370 wrap.go:47] PATCH /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events/pod-i-cannotbind.15826a9933616f7a: (3.082861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:48.095654  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods/pod-i-cannotbind: (1.957392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:48.097483  123370 wrap.go:47] GET /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims/pvc-i-cannotbind: (1.311266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:48.102776  123370 scheduling_queue.go:868] About to try and schedule pod volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-cannotbind
I0211 20:53:48.102827  123370 scheduler.go:449] Skip schedule deleting pod: volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pod-i-cannotbind
I0211 20:53:48.104712  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (6.63997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:48.105934  123370 wrap.go:47] POST /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/events: (1.681864ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44824]
I0211 20:53:48.111807  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (6.650703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:48.111985  123370 pv_controller_base.go:251] claim "volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-cannotbind" deleted
I0211 20:53:48.113513  123370 wrap.go:47] DELETE /api/v1/persistentvolumes: (1.323626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:48.120247  123370 wrap.go:47] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.370398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:48.121709  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pods: (1.081377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:48.123213  123370 wrap.go:47] DELETE /api/v1/namespaces/volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/persistentvolumeclaims: (1.03811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:48.124839  123370 wrap.go:47] DELETE /api/v1/persistentvolumes: (1.100868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:48.126191  123370 wrap.go:47] DELETE /apis/storage.k8s.io/v1/storageclasses: (953.005µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
E0211 20:53:48.126529  123370 scheduling_queue.go:871] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0211 20:53:48.126684  123370 pv_controller_base.go:287] Shutting down persistent volume controller
I0211 20:53:48.126704  123370 pv_controller_base.go:398] claim worker queue shutting down
I0211 20:53:48.126720  123370 pv_controller_base.go:341] volume worker queue shutting down
I0211 20:53:48.126865  123370 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?resourceVersion=30522&timeout=6m18s&timeoutSeconds=378&watch=true: (1m18.27091891s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41330]
I0211 20:53:48.126935  123370 wrap.go:47] GET /apis/apps/v1/statefulsets?resourceVersion=30524&timeout=9m25s&timeoutSeconds=565&watch=true: (1m18.269391437s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41686]
I0211 20:53:48.126869  123370 wrap.go:47] GET /api/v1/persistentvolumes?resourceVersion=30518&timeout=5m46s&timeoutSeconds=346&watch=true: (1m17.159272124s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41848]
I0211 20:53:48.127022  123370 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=30520&timeout=7m11s&timeoutSeconds=431&watch=true: (1m18.270747238s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41680]
I0211 20:53:48.126870  123370 wrap.go:47] GET /api/v1/nodes?resourceVersion=30518&timeout=7m48s&timeoutSeconds=468&watch=true: (1m17.159878969s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41842]
I0211 20:53:48.126958  123370 wrap.go:47] GET /api/v1/persistentvolumeclaims?resourceVersion=30518&timeout=7m15s&timeoutSeconds=435&watch=true: (1m17.159851146s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41840]
I0211 20:53:48.127112  123370 wrap.go:47] GET /api/v1/pods?resourceVersion=30518&timeout=7m56s&timeoutSeconds=476&watch=true: (1m18.268747159s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41688]
I0211 20:53:48.126879  123370 wrap.go:47] GET /api/v1/nodes?resourceVersion=30518&timeout=7m8s&timeoutSeconds=428&watch=true: (1m18.265129541s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41696]
I0211 20:53:48.127028  123370 wrap.go:47] GET /api/v1/pods?resourceVersion=30518&timeout=5m25s&timeoutSeconds=325&watch=true: (1m17.15992924s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41846]
I0211 20:53:48.127209  123370 wrap.go:47] GET /api/v1/services?resourceVersion=30531&timeout=9m10s&timeoutSeconds=550&watch=true: (1m18.26822021s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41694]
I0211 20:53:48.126961  123370 wrap.go:47] GET /api/v1/replicationcontrollers?resourceVersion=30518&timeout=5m26s&timeoutSeconds=326&watch=true: (1m18.270620953s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41682]
I0211 20:53:48.127305  123370 wrap.go:47] GET /api/v1/persistentvolumes?resourceVersion=30518&timeout=5m44s&timeoutSeconds=344&watch=true: (1m18.268736253s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41692]
I0211 20:53:48.127209  123370 wrap.go:47] GET /apis/apps/v1/replicasets?resourceVersion=30524&timeout=8m46s&timeoutSeconds=526&watch=true: (1m18.268723496s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41690]
I0211 20:53:48.127029  123370 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?resourceVersion=30522&timeout=8m28s&timeoutSeconds=508&watch=true: (1m17.159514502s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41850]
I0211 20:53:48.127476  123370 wrap.go:47] GET /api/v1/persistentvolumeclaims?resourceVersion=30518&timeout=7m48s&timeoutSeconds=468&watch=true: (1m18.271640111s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41678]
I0211 20:53:48.133154  123370 wrap.go:47] DELETE /api/v1/nodes: (6.072693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:48.133363  123370 controller.go:170] Shutting down kubernetes service endpoint reconciler
I0211 20:53:48.134821  123370 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.165783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:48.136794  123370 wrap.go:47] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.559884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44600]
I0211 20:53:48.137348  123370 feature_gate.go:226] feature gates: &{map[PodPriority:true TaintNodesByCondition:true PersistentLocalVolumes:true]}
volume_binding_test.go:1129: PVC volume-scheduling-f23e1492-2e3e-11e9-8784-0242ac110002/pvc-i-prebound phase not Bound, got Pending
				from junit_642613dbe8fbf016c1770a7007e34bb12666c617_20190211-204510.xml
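
The failing assertion above comes from a check that waits for a pre-bound PVC to reach phase Bound and times out while the claim is still Pending. As a rough illustration of that kind of check (not the test's actual helper), a phase poll with client-go of this vintage could look like the sketch below; the function name, poll interval, and timeout are assumptions, and the Get call uses the pre-1.18 signature without a context argument.

package volumescheduling

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	clientset "k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls a PersistentVolumeClaim until its status phase is
// Bound or the timeout expires. The helper name, interval, and timeout are
// illustrative assumptions, not the failing test's actual code.
func waitForPVCBound(c clientset.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		pvc, err := c.CoreV1().PersistentVolumeClaims(namespace).Get(name, metav1.GetOptions{})
		if err != nil {
			// Stop polling on API errors; the caller decides how to report them.
			return false, err
		}
		// Keep waiting while the claim is still Pending; done once it is Bound.
		return pvc.Status.Phase == v1.ClaimBound, nil
	})
}

If the claim never leaves Pending before the timeout, PollImmediate returns wait.ErrWaitTimeout and the caller can report the last observed phase, which is the shape of the "phase not Bound, got Pending" message above.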

Error lines from build-log.txt

... skipping 305 lines ...
W0211 20:39:26.350] I0211 20:39:26.349482   54274 serving.go:311] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0211 20:39:26.350] I0211 20:39:26.349560   54274 server.go:561] external host was not specified, using 172.17.0.2
W0211 20:39:26.350] W0211 20:39:26.349573   54274 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0211 20:39:26.351] I0211 20:39:26.349820   54274 server.go:146] Version: v1.14.0-alpha.2.537+3d89da41e8b456
W0211 20:39:26.660] I0211 20:39:26.659792   54274 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0211 20:39:26.661] I0211 20:39:26.659820   54274 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0211 20:39:26.661] E0211 20:39:26.660365   54274 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 20:39:26.661] E0211 20:39:26.660399   54274 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 20:39:26.661] E0211 20:39:26.660536   54274 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 20:39:26.662] E0211 20:39:26.660642   54274 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 20:39:26.662] E0211 20:39:26.660695   54274 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 20:39:26.662] E0211 20:39:26.660731   54274 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 20:39:26.663] I0211 20:39:26.660761   54274 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0211 20:39:26.663] I0211 20:39:26.660769   54274 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0211 20:39:26.663] I0211 20:39:26.662887   54274 clientconn.go:551] parsed scheme: ""
W0211 20:39:26.663] I0211 20:39:26.662934   54274 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 20:39:26.664] I0211 20:39:26.663025   54274 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 20:39:26.664] I0211 20:39:26.663137   54274 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 335 lines ...
W0211 20:39:27.027] W0211 20:39:27.027394   54274 genericapiserver.go:330] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0211 20:39:27.659] I0211 20:39:27.658590   54274 clientconn.go:551] parsed scheme: ""
W0211 20:39:27.659] I0211 20:39:27.658643   54274 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 20:39:27.659] I0211 20:39:27.658704   54274 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 20:39:27.660] I0211 20:39:27.658784   54274 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 20:39:27.660] I0211 20:39:27.659403   54274 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 20:39:27.702] E0211 20:39:27.701956   54274 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 20:39:27.702] E0211 20:39:27.702025   54274 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 20:39:27.703] E0211 20:39:27.702099   54274 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 20:39:27.703] E0211 20:39:27.702169   54274 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 20:39:27.703] E0211 20:39:27.702228   54274 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 20:39:27.703] E0211 20:39:27.702270   54274 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 20:39:27.703] I0211 20:39:27.702314   54274 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0211 20:39:27.704] I0211 20:39:27.702333   54274 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0211 20:39:27.704] I0211 20:39:27.704587   54274 clientconn.go:551] parsed scheme: ""
W0211 20:39:27.705] I0211 20:39:27.704618   54274 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 20:39:27.705] I0211 20:39:27.704685   54274 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 20:39:27.705] I0211 20:39:27.704746   54274 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 180 lines ...
W0211 20:40:04.054] I0211 20:40:04.052331   57649 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W0211 20:40:04.054] I0211 20:40:04.052364   57649 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
W0211 20:40:04.054] I0211 20:40:04.052388   57649 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
W0211 20:40:04.054] I0211 20:40:04.052438   57649 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
W0211 20:40:04.055] I0211 20:40:04.052476   57649 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
W0211 20:40:04.055] I0211 20:40:04.052529   57649 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
W0211 20:40:04.055] E0211 20:40:04.052560   57649 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0211 20:40:04.055] I0211 20:40:04.052593   57649 controllermanager.go:493] Started "resourcequota"
W0211 20:40:04.055] I0211 20:40:04.052671   57649 resource_quota_controller.go:276] Starting resource quota controller
W0211 20:40:04.055] I0211 20:40:04.052688   57649 controller_utils.go:1021] Waiting for caches to sync for resource quota controller
W0211 20:40:04.056] I0211 20:40:04.052711   57649 resource_quota_monitor.go:301] QuotaMonitor running
W0211 20:40:04.065] I0211 20:40:04.065110   57649 controllermanager.go:493] Started "namespace"
W0211 20:40:04.066] I0211 20:40:04.065166   57649 namespace_controller.go:186] Starting namespace controller
... skipping 15 lines ...
W0211 20:40:04.068] I0211 20:40:04.067017   57649 controller_utils.go:1021] Waiting for caches to sync for GC controller
W0211 20:40:04.069] I0211 20:40:04.066945   57649 pvc_protection_controller.go:99] Starting PVC protection controller
W0211 20:40:04.069] I0211 20:40:04.067112   57649 controller_utils.go:1021] Waiting for caches to sync for PVC protection controller
W0211 20:40:04.069] I0211 20:40:04.067126   57649 controllermanager.go:493] Started "daemonset"
W0211 20:40:04.069] I0211 20:40:04.067141   57649 daemon_controller.go:267] Starting daemon sets controller
W0211 20:40:04.069] I0211 20:40:04.067398   57649 controller_utils.go:1021] Waiting for caches to sync for daemon sets controller
W0211 20:40:04.069] E0211 20:40:04.067852   57649 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0211 20:40:04.070] W0211 20:40:04.067872   57649 controllermanager.go:485] Skipping "service"
W0211 20:40:04.070] I0211 20:40:04.068295   57649 controllermanager.go:493] Started "csrapproving"
W0211 20:40:04.070] I0211 20:40:04.068804   57649 controllermanager.go:493] Started "deployment"
W0211 20:40:04.070] I0211 20:40:04.068941   57649 certificate_controller.go:113] Starting certificate controller
W0211 20:40:04.070] I0211 20:40:04.068969   57649 controller_utils.go:1021] Waiting for caches to sync for certificate controller
W0211 20:40:04.070] I0211 20:40:04.068989   57649 deployment_controller.go:152] Starting deployment controller
W0211 20:40:04.070] I0211 20:40:04.069004   57649 controller_utils.go:1021] Waiting for caches to sync for deployment controller
W0211 20:40:04.071] E0211 20:40:04.069343   57649 prometheus.go:138] failed to register depth metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_depth", help: "(Deprecated) Current depth of workqueue: disruption-recheck", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_depth" is not a valid metric name
W0211 20:40:04.071] E0211 20:40:04.069375   57649 prometheus.go:150] failed to register adds metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_adds", help: "(Deprecated) Total number of adds handled by workqueue: disruption-recheck", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_adds" is not a valid metric name
W0211 20:40:04.071] E0211 20:40:04.069461   57649 prometheus.go:162] failed to register latency metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_queue_latency", help: "(Deprecated) How long an item stays in workqueuedisruption-recheck before being requested.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_queue_latency" is not a valid metric name
W0211 20:40:04.071] E0211 20:40:04.069521   57649 prometheus.go:174] failed to register work_duration metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_work_duration", help: "(Deprecated) How long processing an item from workqueuedisruption-recheck takes.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_work_duration" is not a valid metric name
W0211 20:40:04.072] E0211 20:40:04.069552   57649 prometheus.go:189] failed to register unfinished_work_seconds metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_unfinished_work_seconds", help: "(Deprecated) How many seconds of work disruption-recheck has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_unfinished_work_seconds" is not a valid metric name
W0211 20:40:04.072] E0211 20:40:04.069570   57649 prometheus.go:202] failed to register longest_running_processor_microseconds metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for disruption-recheck been running.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_longest_running_processor_microseconds" is not a valid metric name
W0211 20:40:04.073] E0211 20:40:04.069610   57649 prometheus.go:214] failed to register retries metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_retries", help: "(Deprecated) Total number of retries handled by workqueue: disruption-recheck", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_retries" is not a valid metric name
W0211 20:40:04.073] I0211 20:40:04.069690   57649 controllermanager.go:493] Started "disruption"
W0211 20:40:04.073] I0211 20:40:04.070020   57649 controllermanager.go:493] Started "cronjob"
W0211 20:40:04.073] W0211 20:40:04.070046   57649 controllermanager.go:472] "tokencleaner" is disabled
W0211 20:40:04.073] I0211 20:40:04.071217   57649 controllermanager.go:493] Started "persistentvolume-binder"
W0211 20:40:04.074] W0211 20:40:04.071745   57649 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W0211 20:40:04.074] I0211 20:40:04.072557   57649 controllermanager.go:493] Started "attachdetach"
W0211 20:40:04.074] I0211 20:40:04.073086   57649 controllermanager.go:493] Started "endpoint"
W0211 20:40:04.074] I0211 20:40:04.073834   57649 controllermanager.go:493] Started "replicationcontroller"
W0211 20:40:04.074] I0211 20:40:04.074269   57649 controllermanager.go:493] Started "ttl"
W0211 20:40:04.074] I0211 20:40:04.074533   57649 node_lifecycle_controller.go:77] Sending events to api server
W0211 20:40:04.075] E0211 20:40:04.074600   57649 core.go:162] failed to start cloud node lifecycle controller: no cloud provider provided
W0211 20:40:04.075] W0211 20:40:04.074608   57649 controllermanager.go:485] Skipping "cloud-node-lifecycle"
W0211 20:40:04.075] I0211 20:40:04.075149   57649 controllermanager.go:493] Started "persistentvolume-expander"
W0211 20:40:04.075] I0211 20:40:04.075620   57649 disruption.go:286] Starting disruption controller
W0211 20:40:04.076] I0211 20:40:04.075642   57649 controller_utils.go:1021] Waiting for caches to sync for disruption controller
W0211 20:40:04.076] I0211 20:40:04.075663   57649 cronjob_controller.go:92] Starting CronJob Manager
W0211 20:40:04.076] I0211 20:40:04.075830   57649 pv_controller_base.go:271] Starting persistent volume controller
... skipping 42 lines ...
W0211 20:40:04.370] I0211 20:40:04.276215   57649 controller_utils.go:1028] Caches are synced for ReplicationController controller
W0211 20:40:04.370] I0211 20:40:04.276383   57649 controller_utils.go:1028] Caches are synced for endpoint controller
W0211 20:40:04.370] I0211 20:40:04.284783   57649 controller_utils.go:1028] Caches are synced for stateful set controller
W0211 20:40:04.370] I0211 20:40:04.285540   57649 controller_utils.go:1028] Caches are synced for service account controller
W0211 20:40:04.370] I0211 20:40:04.287001   57649 controller_utils.go:1028] Caches are synced for ReplicaSet controller
W0211 20:40:04.371] I0211 20:40:04.288164   54274 controller.go:606] quota admission added evaluator for: serviceaccounts
W0211 20:40:04.441] W0211 20:40:04.440627   57649 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0211 20:40:04.468] I0211 20:40:04.467682   57649 controller_utils.go:1028] Caches are synced for daemon sets controller
W0211 20:40:04.476] I0211 20:40:04.476285   57649 controller_utils.go:1028] Caches are synced for TTL controller
W0211 20:40:04.498] I0211 20:40:04.498224   57649 controller_utils.go:1028] Caches are synced for taint controller
W0211 20:40:04.499] I0211 20:40:04.498385   57649 node_lifecycle_controller.go:1113] Initializing eviction metric for zone: 
W0211 20:40:04.499] I0211 20:40:04.498486   57649 node_lifecycle_controller.go:963] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
W0211 20:40:04.499] I0211 20:40:04.498387   57649 taint_manager.go:198] Starting NoExecuteTaintManager
W0211 20:40:04.500] I0211 20:40:04.498518   57649 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"35c7022c-2e3d-11e9-bc7d-0242ac110002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
W0211 20:40:04.584] I0211 20:40:04.584015   57649 controller_utils.go:1028] Caches are synced for job controller
W0211 20:40:04.585] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
W0211 20:40:04.599] I0211 20:40:04.599282   57649 controller_utils.go:1028] Caches are synced for ClusterRoleAggregator controller
W0211 20:40:04.612] E0211 20:40:04.612107   57649 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0211 20:40:04.613] E0211 20:40:04.613081   57649 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0211 20:40:04.714] NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
I0211 20:40:04.714] kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   35s
I0211 20:40:04.714] Recording: run_kubectl_version_tests
I0211 20:40:04.714] Running command: run_kubectl_version_tests
I0211 20:40:04.715] 
I0211 20:40:04.715] +++ Running case: test-cmd.run_kubectl_version_tests 
... skipping 28 lines ...
I0211 20:40:05.421] Successful: --client --output json has correct client info
I0211 20:40:05.428] (BSuccessful: --client --output json has no server info
I0211 20:40:05.430] (B+++ [0211 20:40:05] Testing kubectl version: compare json output using additional --short flag
I0211 20:40:05.579] Successful: --short --output client json info is equal to non short result
I0211 20:40:05.586] (BSuccessful: --short --output server json info is equal to non short result
I0211 20:40:05.589] (B+++ [0211 20:40:05] Testing kubectl version: compare json output with yaml output
W0211 20:40:05.690] E0211 20:40:05.601542   57649 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0211 20:40:05.690] I0211 20:40:05.677882   57649 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0211 20:40:05.779] I0211 20:40:05.778234   57649 controller_utils.go:1028] Caches are synced for garbage collector controller
I0211 20:40:05.879] Successful: --output json/yaml has identical information
I0211 20:40:05.880] (B+++ exit code: 0
I0211 20:40:05.880] Recording: run_kubectl_config_set_tests
I0211 20:40:05.880] Running command: run_kubectl_config_set_tests
... skipping 41 lines ...
I0211 20:40:08.477] +++ working dir: /go/src/k8s.io/kubernetes
I0211 20:40:08.481] +++ command: run_RESTMapper_evaluation_tests
I0211 20:40:08.492] +++ [0211 20:40:08] Creating namespace namespace-1549917608-18779
I0211 20:40:08.566] namespace/namespace-1549917608-18779 created
I0211 20:40:08.639] Context "test" modified.
I0211 20:40:08.647] +++ [0211 20:40:08] Testing RESTMapper
I0211 20:40:08.775] +++ [0211 20:40:08] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0211 20:40:08.791] +++ exit code: 0
I0211 20:40:08.917] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0211 20:40:08.917] bindings                                                                      true         Binding
I0211 20:40:08.918] componentstatuses                 cs                                          false        ComponentStatus
I0211 20:40:08.918] configmaps                        cm                                          true         ConfigMap
I0211 20:40:08.918] endpoints                         ep                                          true         Endpoints
... skipping 597 lines ...
I0211 20:40:28.617] core.sh:223: Successful get secret/test-secret --namespace=test-kubectl-describe-pod {{.metadata.name}}: test-secret
I0211 20:40:28.713] (Bcore.sh:224: Successful get secret/test-secret --namespace=test-kubectl-describe-pod {{.type}}: test-type
I0211 20:40:28.811] (Bcore.sh:229: Successful get configmaps --namespace=test-kubectl-describe-pod {{range.items}}{{ if eq $id_field \"test-configmap\" }}found{{end}}{{end}}:: :
I0211 20:40:28.891] (Bconfigmap/test-configmap created
I0211 20:40:28.993] core.sh:235: Successful get configmap/test-configmap --namespace=test-kubectl-describe-pod {{.metadata.name}}: test-configmap
I0211 20:40:29.071] (Bpoddisruptionbudget.policy/test-pdb-1 created
W0211 20:40:29.171] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0211 20:40:29.171] error: setting 'all' parameter but found a non empty selector. 
W0211 20:40:29.172] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0211 20:40:29.172] I0211 20:40:29.067796   54274 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
I0211 20:40:29.272] core.sh:241: Successful get pdb/test-pdb-1 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 2
I0211 20:40:29.273] (Bpoddisruptionbudget.policy/test-pdb-2 created
I0211 20:40:29.353] core.sh:245: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
I0211 20:40:29.439] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0211 20:40:29.536] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0211 20:40:29.613] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0211 20:40:29.715] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0211 20:40:29.883] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:40:30.081] (Bpod/env-test-pod created
W0211 20:40:30.182] error: min-available and max-unavailable cannot be both specified
I0211 20:40:30.296] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0211 20:40:30.296] Name:               env-test-pod
I0211 20:40:30.297] Namespace:          test-kubectl-describe-pod
I0211 20:40:30.297] Priority:           0
I0211 20:40:30.297] PriorityClassName:  <none>
I0211 20:40:30.297] Node:               <none>
... skipping 145 lines ...
I0211 20:40:42.462] (Bservice "modified" deleted
I0211 20:40:42.550] replicationcontroller "modified" deleted
I0211 20:40:42.818] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:40:42.972] (Bpod/valid-pod created
I0211 20:40:43.080] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 20:40:43.231] (BSuccessful
I0211 20:40:43.232] message:Error from server: cannot restore map from string
I0211 20:40:43.232] has:cannot restore map from string
I0211 20:40:43.319] Successful
I0211 20:40:43.319] message:pod/valid-pod patched (no change)
I0211 20:40:43.319] has:patched (no change)
I0211 20:40:43.406] pod/valid-pod patched
I0211 20:40:43.503] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 2 lines ...
I0211 20:40:43.768] core.sh:461: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx2:
I0211 20:40:43.850] (Bpod/valid-pod patched
I0211 20:40:43.954] core.sh:465: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0211 20:40:44.042] (Bpod/valid-pod patched
I0211 20:40:44.145] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0211 20:40:44.223] (Bpod/valid-pod patched
W0211 20:40:44.324] E0211 20:40:43.223573   54274 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0211 20:40:44.425] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0211 20:40:44.498] (Bpod/valid-pod patched
I0211 20:40:44.613] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0211 20:40:44.800] (B+++ [0211 20:40:44] "kubectl patch with resourceVersion 495" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0211 20:40:45.054] pod "valid-pod" deleted
I0211 20:40:45.067] pod/valid-pod replaced
I0211 20:40:45.167] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0211 20:40:45.322] (BSuccessful
I0211 20:40:45.322] message:error: --grace-period must have --force specified
I0211 20:40:45.323] has:\-\-grace-period must have \-\-force specified
I0211 20:40:45.482] Successful
I0211 20:40:45.483] message:error: --timeout must have --force specified
I0211 20:40:45.483] has:\-\-timeout must have \-\-force specified
I0211 20:40:45.642] node/node-v1-test created
W0211 20:40:45.742] W0211 20:40:45.641742   57649 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0211 20:40:45.843] node/node-v1-test replaced
I0211 20:40:45.921] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0211 20:40:46.004] (Bnode "node-v1-test" deleted
I0211 20:40:46.107] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0211 20:40:46.388] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0211 20:40:47.370] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 16 lines ...
I0211 20:40:47.886] (Bcore.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0211 20:40:47.969] (Bpod/valid-pod labeled
W0211 20:40:48.070] Edit cancelled, no changes made.
W0211 20:40:48.070] Edit cancelled, no changes made.
W0211 20:40:48.070] Edit cancelled, no changes made.
W0211 20:40:48.071] Edit cancelled, no changes made.
W0211 20:40:48.071] error: 'name' already has a value (valid-pod), and --overwrite is false
I0211 20:40:48.171] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I0211 20:40:48.187] (Bcore.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 20:40:48.278] (Bpod "valid-pod" force deleted
W0211 20:40:48.379] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 20:40:48.479] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:40:48.480] (B+++ [0211 20:40:48] Creating namespace namespace-1549917648-19856
... skipping 82 lines ...
I0211 20:40:55.526] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0211 20:40:55.528] +++ working dir: /go/src/k8s.io/kubernetes
I0211 20:40:55.531] +++ command: run_kubectl_create_error_tests
I0211 20:40:55.542] +++ [0211 20:40:55] Creating namespace namespace-1549917655-27606
I0211 20:40:55.617] namespace/namespace-1549917655-27606 created
I0211 20:40:55.692] Context "test" modified.
I0211 20:40:55.700] +++ [0211 20:40:55] Testing kubectl create with error
W0211 20:40:55.801] Error: required flag(s) "filename" not set
W0211 20:40:55.801] 
W0211 20:40:55.801] 
W0211 20:40:55.801] Examples:
W0211 20:40:55.801]   # Create a pod using the data in pod.json.
W0211 20:40:55.801]   kubectl create -f ./pod.json
W0211 20:40:55.801]   
... skipping 38 lines ...
W0211 20:40:55.807]   kubectl create -f FILENAME [options]
W0211 20:40:55.807] 
W0211 20:40:55.807] Use "kubectl <command> --help" for more information about a given command.
W0211 20:40:55.808] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0211 20:40:55.808] 
W0211 20:40:55.808] required flag(s) "filename" not set
I0211 20:40:55.939] +++ [0211 20:40:55] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0211 20:40:56.040] kubectl convert is DEPRECATED and will be removed in a future version.
W0211 20:40:56.040] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0211 20:40:56.141] +++ exit code: 0
I0211 20:40:56.182] Recording: run_kubectl_apply_tests
I0211 20:40:56.182] Running command: run_kubectl_apply_tests
I0211 20:40:56.205] 
... skipping 21 lines ...
W0211 20:40:58.368] I0211 20:40:57.827876   57649 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549917656-145", Name:"test-deployment-retainkeys", UID:"55612631-2e3d-11e9-bc7d-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"504", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-deployment-retainkeys-ddc987c6 to 1
W0211 20:40:58.369] I0211 20:40:57.830716   57649 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549917656-145", Name:"test-deployment-retainkeys-ddc987c6", UID:"55c351a3-2e3d-11e9-bc7d-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-deployment-retainkeys-ddc987c6-rnl84
I0211 20:40:58.469] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:40:58.569] (Bpod/selector-test-pod created
I0211 20:40:58.686] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0211 20:40:58.782] (BSuccessful
I0211 20:40:58.783] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0211 20:40:58.783] has:pods "selector-test-pod-dont-apply" not found
I0211 20:40:58.875] pod "selector-test-pod" deleted
I0211 20:40:58.991] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:40:59.220] (Bpod/test-pod created (server dry run)
I0211 20:40:59.322] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:40:59.481] (Bpod/test-pod created
... skipping 4 lines ...
W0211 20:41:00.424] I0211 20:41:00.423849   54274 clientconn.go:551] parsed scheme: ""
W0211 20:41:00.425] I0211 20:41:00.423887   54274 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 20:41:00.425] I0211 20:41:00.423934   54274 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 20:41:00.425] I0211 20:41:00.424083   54274 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 20:41:00.426] I0211 20:41:00.424789   54274 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 20:41:00.431] I0211 20:41:00.431307   54274 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0211 20:41:00.527] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0211 20:41:00.628] kind.mygroup.example.com/myobj created (server dry run)
I0211 20:41:00.629] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0211 20:41:00.733] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:41:00.901] (Bpod/a created
I0211 20:41:02.212] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I0211 20:41:02.304] (BSuccessful
I0211 20:41:02.305] message:Error from server (NotFound): pods "b" not found
I0211 20:41:02.305] has:pods "b" not found
I0211 20:41:02.476] pod/b created
I0211 20:41:02.491] pod/a pruned
I0211 20:41:03.987] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I0211 20:41:04.075] (BSuccessful
I0211 20:41:04.075] message:Error from server (NotFound): pods "a" not found
I0211 20:41:04.075] has:pods "a" not found
I0211 20:41:04.159] pod "b" deleted
I0211 20:41:04.259] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:41:04.419] (Bpod/a created
I0211 20:41:04.521] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I0211 20:41:04.611] (BSuccessful
I0211 20:41:04.612] message:Error from server (NotFound): pods "b" not found
I0211 20:41:04.612] has:pods "b" not found
I0211 20:41:04.769] pod/b created
I0211 20:41:04.869] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I0211 20:41:04.963] (Bapply.sh:166: Successful get pods b {{.metadata.name}}: b
I0211 20:41:05.049] (Bpod "a" deleted
I0211 20:41:05.054] pod "b" deleted
I0211 20:41:05.232] Successful
I0211 20:41:05.232] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I0211 20:41:05.232] has:all resources selected for prune without explicitly passing --all
I0211 20:41:05.394] pod/a created
I0211 20:41:05.400] pod/b created
I0211 20:41:05.409] service/prune-svc created
I0211 20:41:06.720] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I0211 20:41:06.819] (Bapply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 137 lines ...
I0211 20:41:18.747] Context "test" modified.
I0211 20:41:18.755] +++ [0211 20:41:18] Testing kubectl create filter
I0211 20:41:18.860] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:41:19.017] (Bpod/selector-test-pod created
I0211 20:41:19.132] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0211 20:41:19.229] (BSuccessful
I0211 20:41:19.229] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0211 20:41:19.229] has:pods "selector-test-pod-dont-apply" not found
I0211 20:41:19.319] pod "selector-test-pod" deleted
I0211 20:41:19.342] +++ exit code: 0
I0211 20:41:19.382] Recording: run_kubectl_apply_deployments_tests
I0211 20:41:19.382] Running command: run_kubectl_apply_deployments_tests
I0211 20:41:19.404] 
... skipping 38 lines ...
W0211 20:41:21.977] I0211 20:41:21.879166   57649 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549917679-20328", Name:"nginx", UID:"6418ae31-2e3d-11e9-bc7d-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"701", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-776cc67f78 to 3
W0211 20:41:21.977] I0211 20:41:21.882108   57649 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549917679-20328", Name:"nginx-776cc67f78", UID:"641937ce-2e3d-11e9-bc7d-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-cpjk6
W0211 20:41:21.978] I0211 20:41:21.884036   57649 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549917679-20328", Name:"nginx-776cc67f78", UID:"641937ce-2e3d-11e9-bc7d-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-kp5d5
W0211 20:41:21.978] I0211 20:41:21.886058   57649 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549917679-20328", Name:"nginx-776cc67f78", UID:"641937ce-2e3d-11e9-bc7d-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-kssp6
I0211 20:41:22.079] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0211 20:41:26.208] (BSuccessful
I0211 20:41:26.208] message:Error from server (Conflict): error when applying patch:
I0211 20:41:26.209] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1549917679-20328\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0211 20:41:26.209] to:
I0211 20:41:26.209] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0211 20:41:26.209] Name: "nginx", Namespace: "namespace-1549917679-20328"
I0211 20:41:26.210] Object: &{map["metadata":map["name":"nginx" "namespace":"namespace-1549917679-20328" "uid":"6418ae31-2e3d-11e9-bc7d-0242ac110002" "generation":'\x01' "labels":map["name":"nginx"] "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1549917679-20328\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1549917679-20328/deployments/nginx" "resourceVersion":"714" "creationTimestamp":"2019-02-11T20:41:21Z"] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[]]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler"]] "strategy":map["rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01'] "type":"RollingUpdate"] "revisionHistoryLimit":%!q(int64=+2147483647)] "status":map["updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["lastTransitionTime":"2019-02-11T20:41:21Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability." "type":"Available" "status":"False" "lastUpdateTime":"2019-02-11T20:41:21Z"]] "observedGeneration":'\x01' "replicas":'\x03'] "kind":"Deployment" "apiVersion":"extensions/v1beta1"]}
I0211 20:41:26.211] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0211 20:41:26.211] has:Error from server (Conflict)
W0211 20:41:30.427] E0211 20:41:30.426454   57649 replica_set.go:450] Sync "namespace-1549917679-20328/nginx-776cc67f78" failed with Operation cannot be fulfilled on replicasets.apps "nginx-776cc67f78": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1549917679-20328/nginx-776cc67f78, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 641937ce-2e3d-11e9-bc7d-0242ac110002, UID in object meta: 
I0211 20:41:31.412] deployment.extensions/nginx configured
I0211 20:41:31.512] Successful
I0211 20:41:31.512] message:        "name": "nginx2"
I0211 20:41:31.513]           "name": "nginx2"
I0211 20:41:31.513] has:"name": "nginx2"
W0211 20:41:31.614] I0211 20:41:31.415352   57649 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549917679-20328", Name:"nginx", UID:"69c7b3a4-2e3d-11e9-bc7d-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"736", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7bd4fbc645 to 3
... skipping 141 lines ...
I0211 20:41:38.771] +++ [0211 20:41:38] Creating namespace namespace-1549917698-6177
I0211 20:41:38.846] namespace/namespace-1549917698-6177 created
I0211 20:41:38.922] Context "test" modified.
I0211 20:41:38.930] +++ [0211 20:41:38] Testing kubectl get
I0211 20:41:39.034] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:41:39.131] (BSuccessful
I0211 20:41:39.132] message:Error from server (NotFound): pods "abc" not found
I0211 20:41:39.132] has:pods "abc" not found
I0211 20:41:39.226] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:41:39.317] (BSuccessful
I0211 20:41:39.317] message:Error from server (NotFound): pods "abc" not found
I0211 20:41:39.317] has:pods "abc" not found
I0211 20:41:39.412] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:41:39.500] (BSuccessful
I0211 20:41:39.501] message:{
I0211 20:41:39.501]     "apiVersion": "v1",
I0211 20:41:39.501]     "items": [],
... skipping 23 lines ...
I0211 20:41:39.861] has not:No resources found
I0211 20:41:39.947] Successful
I0211 20:41:39.948] message:NAME
I0211 20:41:39.948] has not:No resources found
I0211 20:41:40.040] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:41:40.157] (BSuccessful
I0211 20:41:40.158] message:error: the server doesn't have a resource type "foobar"
I0211 20:41:40.158] has not:No resources found
I0211 20:41:40.245] Successful
I0211 20:41:40.245] message:No resources found.
I0211 20:41:40.245] has:No resources found
I0211 20:41:40.332] Successful
I0211 20:41:40.332] message:
I0211 20:41:40.332] has not:No resources found
I0211 20:41:40.419] Successful
I0211 20:41:40.420] message:No resources found.
I0211 20:41:40.420] has:No resources found
I0211 20:41:40.511] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:41:40.603] (BSuccessful
I0211 20:41:40.603] message:Error from server (NotFound): pods "abc" not found
I0211 20:41:40.603] has:pods "abc" not found
I0211 20:41:40.605] FAIL!
I0211 20:41:40.605] message:Error from server (NotFound): pods "abc" not found
I0211 20:41:40.605] has not:List
I0211 20:41:40.606] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0211 20:41:40.721] Successful
I0211 20:41:40.721] message:I0211 20:41:40.670035   69946 loader.go:359] Config loaded from file /tmp/tmp.fcjDE0yYdj/.kube/config
I0211 20:41:40.722] I0211 20:41:40.671621   69946 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0211 20:41:40.722] I0211 20:41:40.694518   69946 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 653 lines ...
I0211 20:41:44.230] }
I0211 20:41:44.319] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 20:41:44.578] (B<no value>Successful
I0211 20:41:44.578] message:valid-pod:
I0211 20:41:44.579] has:valid-pod:
I0211 20:41:44.668] Successful
I0211 20:41:44.669] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0211 20:41:44.669] 	template was:
I0211 20:41:44.669] 		{.missing}
I0211 20:41:44.669] 	object given to jsonpath engine was:
I0211 20:41:44.670] 		map[string]interface {}{"kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"valid-pod", "namespace":"namespace-1549917703-9735", "selfLink":"/api/v1/namespaces/namespace-1549917703-9735/pods/valid-pod", "uid":"715c9359-2e3d-11e9-bc7d-0242ac110002", "resourceVersion":"809", "creationTimestamp":"2019-02-11T20:41:44Z", "labels":map[string]interface {}{"name":"valid-pod"}}, "spec":map[string]interface {}{"restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler", "priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname"}}}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0211 20:41:44.670] has:missing is not found
I0211 20:41:44.757] Successful
I0211 20:41:44.757] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0211 20:41:44.757] 	template was:
I0211 20:41:44.758] 		{{.missing}}
I0211 20:41:44.758] 	raw data was:
I0211 20:41:44.758] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-02-11T20:41:44Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1549917703-9735","resourceVersion":"809","selfLink":"/api/v1/namespaces/namespace-1549917703-9735/pods/valid-pod","uid":"715c9359-2e3d-11e9-bc7d-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0211 20:41:44.759] 	object given to template engine was:
I0211 20:41:44.759] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-02-11T20:41:44Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1549917703-9735 resourceVersion:809 selfLink:/api/v1/namespaces/namespace-1549917703-9735/pods/valid-pod uid:715c9359-2e3d-11e9-bc7d-0242ac110002] spec:map[terminationGracePeriodSeconds:30 containers:[map[resources:map[requests:map[cpu:1 memory:512Mi] limits:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[]] status:map[phase:Pending qosClass:Guaranteed]]
I0211 20:41:44.759] has:map has no entry for key "missing"
W0211 20:41:44.860] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
W0211 20:41:45.841] E0211 20:41:45.840960   70334 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I0211 20:41:45.942] Successful
I0211 20:41:45.942] message:NAME        READY   STATUS    RESTARTS   AGE
I0211 20:41:45.943] valid-pod   0/1     Pending   0          0s
I0211 20:41:45.943] has:STATUS
I0211 20:41:45.943] Successful
... skipping 80 lines ...
I0211 20:41:48.135]   terminationGracePeriodSeconds: 30
I0211 20:41:48.135] status:
I0211 20:41:48.135]   phase: Pending
I0211 20:41:48.135]   qosClass: Guaranteed
I0211 20:41:48.135] has:name: valid-pod
I0211 20:41:48.135] Successful
I0211 20:41:48.135] message:Error from server (NotFound): pods "invalid-pod" not found
I0211 20:41:48.136] has:"invalid-pod" not found
I0211 20:41:48.210] pod "valid-pod" deleted
I0211 20:41:48.313] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:41:48.475] (Bpod/redis-master created
I0211 20:41:48.478] pod/valid-pod created
I0211 20:41:48.587] Successful
... skipping 254 lines ...
I0211 20:41:53.384] Running command: run_create_secret_tests
I0211 20:41:53.407] 
I0211 20:41:53.410] +++ Running case: test-cmd.run_create_secret_tests 
I0211 20:41:53.412] +++ working dir: /go/src/k8s.io/kubernetes
I0211 20:41:53.415] +++ command: run_create_secret_tests
I0211 20:41:53.519] Successful
I0211 20:41:53.519] message:Error from server (NotFound): secrets "mysecret" not found
I0211 20:41:53.520] has:secrets "mysecret" not found
I0211 20:41:53.702] Successful
I0211 20:41:53.703] message:Error from server (NotFound): secrets "mysecret" not found
I0211 20:41:53.703] has:secrets "mysecret" not found
I0211 20:41:53.705] Successful
I0211 20:41:53.705] message:user-specified
I0211 20:41:53.705] has:user-specified
I0211 20:41:53.790] Successful
I0211 20:41:53.879] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"772b7107-2e3d-11e9-bc7d-0242ac110002","resourceVersion":"884","creationTimestamp":"2019-02-11T20:41:53Z"}}
... skipping 99 lines ...
I0211 20:41:57.074] has:Timeout exceeded while reading body
I0211 20:41:57.175] Successful
I0211 20:41:57.175] message:NAME        READY   STATUS    RESTARTS   AGE
I0211 20:41:57.175] valid-pod   0/1     Pending   0          2s
I0211 20:41:57.176] has:valid-pod
I0211 20:41:57.258] Successful
I0211 20:41:57.259] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0211 20:41:57.259] has:Invalid timeout value
I0211 20:41:57.351] pod "valid-pod" deleted
I0211 20:41:57.378] +++ exit code: 0
I0211 20:41:57.417] Recording: run_crd_tests
I0211 20:41:57.417] Running command: run_crd_tests
I0211 20:41:57.442] 
... skipping 167 lines ...
I0211 20:42:02.445] foo.company.com/test patched
I0211 20:42:02.538] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0211 20:42:02.623] (Bfoo.company.com/test patched
I0211 20:42:02.716] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0211 20:42:02.798] (Bfoo.company.com/test patched
I0211 20:42:02.893] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0211 20:42:03.053] (B+++ [0211 20:42:03] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0211 20:42:03.120] {
I0211 20:42:03.121]     "apiVersion": "company.com/v1",
I0211 20:42:03.121]     "kind": "Foo",
I0211 20:42:03.121]     "metadata": {
I0211 20:42:03.121]         "annotations": {
I0211 20:42:03.122]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 113 lines ...
I0211 20:42:05.627] has:bar.company.com/test
I0211 20:42:05.707] bar.company.com "test" deleted
W0211 20:42:05.808] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 73003 Killed                  while [ ${tries} -lt 10 ]; do
W0211 20:42:05.808]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W0211 20:42:05.808] done
W0211 20:42:05.809] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 73002 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W0211 20:42:05.911] E0211 20:42:05.910810   57649 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"]
W0211 20:42:06.241] I0211 20:42:06.240290   57649 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0211 20:42:06.241] I0211 20:42:06.241369   54274 clientconn.go:551] parsed scheme: ""
W0211 20:42:06.242] I0211 20:42:06.241427   54274 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 20:42:06.242] I0211 20:42:06.241464   54274 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 20:42:06.242] I0211 20:42:06.241503   54274 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 20:42:06.242] I0211 20:42:06.241816   54274 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 62 lines ...
I0211 20:42:12.516] (Bnamespace/non-native-resources created
I0211 20:42:12.683] bar.company.com/test created
I0211 20:42:12.785] crd.sh:456: Successful get bars {{len .items}}: 1
I0211 20:42:12.867] (Bnamespace "non-native-resources" deleted
I0211 20:42:18.124] crd.sh:459: Successful get bars {{len .items}}: 0
I0211 20:42:18.295] (Bcustomresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0211 20:42:18.396] Error from server (NotFound): namespaces "non-native-resources" not found
I0211 20:42:18.497] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0211 20:42:18.507] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0211 20:42:18.611] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0211 20:42:18.641] +++ exit code: 0
I0211 20:42:18.679] Recording: run_cmd_with_img_tests
I0211 20:42:18.680] Running command: run_cmd_with_img_tests
... skipping 7 lines ...
I0211 20:42:18.894] +++ [0211 20:42:18] Testing cmd with image
I0211 20:42:18.990] Successful
I0211 20:42:18.991] message:deployment.apps/test1 created
I0211 20:42:18.991] has:deployment.apps/test1 created
I0211 20:42:19.070] deployment.extensions "test1" deleted
I0211 20:42:19.153] Successful
I0211 20:42:19.153] message:error: Invalid image name "InvalidImageName": invalid reference format
I0211 20:42:19.153] has:error: Invalid image name "InvalidImageName": invalid reference format
I0211 20:42:19.169] +++ exit code: 0
I0211 20:42:19.241] +++ [0211 20:42:19] Testing recursive resources
I0211 20:42:19.247] +++ [0211 20:42:19] Creating namespace namespace-1549917739-17744
I0211 20:42:19.318] namespace/namespace-1549917739-17744 created
I0211 20:42:19.395] Context "test" modified.
I0211 20:42:19.495] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:42:19.760] (Bgeneric-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 20:42:19.763] (BSuccessful
I0211 20:42:19.763] message:pod/busybox0 created
I0211 20:42:19.763] pod/busybox1 created
I0211 20:42:19.763] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0211 20:42:19.763] has:error validating data: kind not set
I0211 20:42:19.860] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 20:42:20.045] (Bgeneric-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0211 20:42:20.047] (BSuccessful
I0211 20:42:20.047] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 20:42:20.047] has:Object 'Kind' is missing
I0211 20:42:20.138] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 20:42:20.398] (Bgeneric-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0211 20:42:20.400] (BSuccessful
I0211 20:42:20.400] message:pod/busybox0 replaced
I0211 20:42:20.400] pod/busybox1 replaced
I0211 20:42:20.400] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0211 20:42:20.401] has:error validating data: kind not set
I0211 20:42:20.487] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 20:42:20.600] (BSuccessful
I0211 20:42:20.600] message:Name:               busybox0
I0211 20:42:20.600] Namespace:          namespace-1549917739-17744
I0211 20:42:20.600] Priority:           0
I0211 20:42:20.601] PriorityClassName:  <none>
... skipping 159 lines ...
I0211 20:42:20.617] has:Object 'Kind' is missing
I0211 20:42:20.705] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 20:42:20.895] (Bgeneric-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0211 20:42:20.897] (BSuccessful
I0211 20:42:20.897] message:pod/busybox0 annotated
I0211 20:42:20.898] pod/busybox1 annotated
I0211 20:42:20.898] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 20:42:20.898] has:Object 'Kind' is missing
I0211 20:42:20.995] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 20:42:21.304] (Bgeneric-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0211 20:42:21.306] (BSuccessful
I0211 20:42:21.307] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0211 20:42:21.307] pod/busybox0 configured
I0211 20:42:21.307] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0211 20:42:21.307] pod/busybox1 configured
I0211 20:42:21.308] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0211 20:42:21.308] has:error validating data: kind not set
I0211 20:42:21.396] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:42:21.566] (Bdeployment.apps/nginx created
I0211 20:42:21.670] generic-resources.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0211 20:42:21.759] (Bgeneric-resources.sh:269: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 20:42:21.942] (Bgeneric-resources.sh:273: Successful get deployment nginx {{ .apiVersion }}: extensions/v1beta1
I0211 20:42:21.944] (BSuccessful
... skipping 42 lines ...
I0211 20:42:22.025] deployment.extensions "nginx" deleted
I0211 20:42:22.130] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 20:42:22.299] (Bgeneric-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 20:42:22.302] (BSuccessful
I0211 20:42:22.302] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0211 20:42:22.302] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0211 20:42:22.303] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 20:42:22.303] has:Object 'Kind' is missing
I0211 20:42:22.395] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 20:42:22.482] (BSuccessful
I0211 20:42:22.482] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 20:42:22.482] has:busybox0:busybox1:
I0211 20:42:22.484] Successful
I0211 20:42:22.484] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 20:42:22.484] has:Object 'Kind' is missing
I0211 20:42:22.579] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 20:42:22.672] (Bpod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 20:42:22.763] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0211 20:42:22.766] (BSuccessful
I0211 20:42:22.766] message:pod/busybox0 labeled
I0211 20:42:22.766] pod/busybox1 labeled
I0211 20:42:22.766] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 20:42:22.766] has:Object 'Kind' is missing
I0211 20:42:22.863] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 20:42:22.949] (Bpod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 20:42:23.050] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0211 20:42:23.052] (BSuccessful
I0211 20:42:23.053] message:pod/busybox0 patched
I0211 20:42:23.053] pod/busybox1 patched
I0211 20:42:23.053] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 20:42:23.053] has:Object 'Kind' is missing
I0211 20:42:23.143] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 20:42:23.326] (Bgeneric-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:42:23.328] (BSuccessful
I0211 20:42:23.328] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 20:42:23.328] pod "busybox0" force deleted
I0211 20:42:23.328] pod "busybox1" force deleted
I0211 20:42:23.329] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 20:42:23.329] has:Object 'Kind' is missing
I0211 20:42:23.421] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:42:23.572] (Breplicationcontroller/busybox0 created
I0211 20:42:23.576] replicationcontroller/busybox1 created
I0211 20:42:23.679] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 20:42:23.775] (Bgeneric-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 20:42:23.870] (Bgeneric-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0211 20:42:23.963] (Bgeneric-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0211 20:42:24.152] (Bgeneric-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0211 20:42:24.245] (Bgeneric-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0211 20:42:24.248] (BSuccessful
I0211 20:42:24.248] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0211 20:42:24.248] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0211 20:42:24.248] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 20:42:24.249] has:Object 'Kind' is missing
I0211 20:42:24.329] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0211 20:42:24.416] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0211 20:42:24.518] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 20:42:24.613] (Bgeneric-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0211 20:42:24.707] (Bgeneric-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0211 20:42:24.905] (Bgeneric-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0211 20:42:24.998] (Bgeneric-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0211 20:42:25.001] (BSuccessful
I0211 20:42:25.001] message:service/busybox0 exposed
I0211 20:42:25.001] service/busybox1 exposed
I0211 20:42:25.002] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 20:42:25.002] has:Object 'Kind' is missing
I0211 20:42:25.097] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 20:42:25.187] (Bgeneric-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0211 20:42:25.282] (Bgeneric-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0211 20:42:25.483] (Bgeneric-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0211 20:42:25.577] (Bgeneric-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0211 20:42:25.580] (BSuccessful
I0211 20:42:25.580] message:replicationcontroller/busybox0 scaled
I0211 20:42:25.580] replicationcontroller/busybox1 scaled
I0211 20:42:25.580] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 20:42:25.581] has:Object 'Kind' is missing
I0211 20:42:25.673] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 20:42:25.854] (Bgeneric-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:42:25.856] (BSuccessful
I0211 20:42:25.856] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 20:42:25.856] replicationcontroller "busybox0" force deleted
I0211 20:42:25.857] replicationcontroller "busybox1" force deleted
I0211 20:42:25.857] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 20:42:25.857] has:Object 'Kind' is missing
I0211 20:42:25.951] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:42:26.109] (Bdeployment.apps/nginx1-deployment created
I0211 20:42:26.115] deployment.apps/nginx0-deployment created
W0211 20:42:26.215] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0211 20:42:26.216] I0211 20:42:18.978731   57649 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549917738-3333", Name:"test1", UID:"8620e09d-2e3d-11e9-bc7d-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1002", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-848d5d4b47 to 1
... skipping 2 lines ...
W0211 20:42:26.217] I0211 20:42:21.573169   57649 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549917739-17744", Name:"nginx-5f7cff5b56", UID:"87ad3e77-2e3d-11e9-bc7d-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1029", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-wsd6z
W0211 20:42:26.217] I0211 20:42:21.576648   57649 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549917739-17744", Name:"nginx-5f7cff5b56", UID:"87ad3e77-2e3d-11e9-bc7d-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1029", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-sh9h6
W0211 20:42:26.218] I0211 20:42:21.576914   57649 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549917739-17744", Name:"nginx-5f7cff5b56", UID:"87ad3e77-2e3d-11e9-bc7d-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1029", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-cwg6p
W0211 20:42:26.218] kubectl convert is DEPRECATED and will be removed in a future version.
W0211 20:42:26.218] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0211 20:42:26.218] I0211 20:42:23.010106   57649 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0211 20:42:26.219] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0211 20:42:26.219] I0211 20:42:23.575883   57649 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549917739-17744", Name:"busybox0", UID:"88dece1c-2e3d-11e9-bc7d-0242ac110002", APIVersion:"v1", ResourceVersion:"1059", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-j2tdh
W0211 20:42:26.219] I0211 20:42:23.578347   57649 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549917739-17744", Name:"busybox1", UID:"88df78ff-2e3d-11e9-bc7d-0242ac110002", APIVersion:"v1", ResourceVersion:"1061", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-vvhp5
W0211 20:42:26.220] I0211 20:42:25.380570   57649 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549917739-17744", Name:"busybox0", UID:"88dece1c-2e3d-11e9-bc7d-0242ac110002", APIVersion:"v1", ResourceVersion:"1080", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-gshg4
W0211 20:42:26.220] I0211 20:42:25.390308   57649 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549917739-17744", Name:"busybox1", UID:"88df78ff-2e3d-11e9-bc7d-0242ac110002", APIVersion:"v1", ResourceVersion:"1084", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-xltfc
W0211 20:42:26.220] I0211 20:42:26.111963   57649 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549917739-17744", Name:"nginx1-deployment", UID:"8a61a92d-2e3d-11e9-bc7d-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1100", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7c76c6cbb8 to 2
W0211 20:42:26.221] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0211 20:42:26.221] I0211 20:42:26.116084   57649 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549917739-17744", Name:"nginx1-deployment-7c76c6cbb8", UID:"8a6243e0-2e3d-11e9-bc7d-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1101", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7c76c6cbb8-98246
W0211 20:42:26.222] I0211 20:42:26.116450   57649 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549917739-17744", Name:"nginx0-deployment", UID:"8a6271aa-2e3d-11e9-bc7d-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1102", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-7bb85585d7 to 2
W0211 20:42:26.222] I0211 20:42:26.119464   57649 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549917739-17744", Name:"nginx0-deployment-7bb85585d7", UID:"8a6305d9-2e3d-11e9-bc7d-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1106", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-7bb85585d7-559tk
W0211 20:42:26.222] I0211 20:42:26.120245   57649 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549917739-17744", Name:"nginx1-deployment-7c76c6cbb8", UID:"8a6243e0-2e3d-11e9-bc7d-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1101", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7c76c6cbb8-6gbpf
W0211 20:42:26.223] I0211 20:42:26.122781   57649 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549917739-17744", Name:"nginx0-deployment-7bb85585d7", UID:"8a6305d9-2e3d-11e9-bc7d-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1106", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-7bb85585d7-brn6z
I0211 20:42:26.323] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0211 20:42:26.324] (Bgeneric-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0211 20:42:26.526] (Bgeneric-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0211 20:42:26.528] (BSuccessful
I0211 20:42:26.528] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0211 20:42:26.528] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0211 20:42:26.529] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 20:42:26.529] has:Object 'Kind' is missing
I0211 20:42:26.618] deployment.apps/nginx1-deployment paused
I0211 20:42:26.621] deployment.apps/nginx0-deployment paused
I0211 20:42:26.723] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0211 20:42:26.725] (BSuccessful
I0211 20:42:26.725] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0211 20:42:27.039] 1         <none>
I0211 20:42:27.039] 
I0211 20:42:27.039] deployment.apps/nginx0-deployment 
I0211 20:42:27.040] REVISION  CHANGE-CAUSE
I0211 20:42:27.040] 1         <none>
I0211 20:42:27.040] 
I0211 20:42:27.040] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 20:42:27.040] has:nginx0-deployment
I0211 20:42:27.041] Successful
I0211 20:42:27.041] message:deployment.apps/nginx1-deployment 
I0211 20:42:27.042] REVISION  CHANGE-CAUSE
I0211 20:42:27.042] 1         <none>
I0211 20:42:27.042] 
I0211 20:42:27.042] deployment.apps/nginx0-deployment 
I0211 20:42:27.042] REVISION  CHANGE-CAUSE
I0211 20:42:27.042] 1         <none>
I0211 20:42:27.043] 
I0211 20:42:27.043] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 20:42:27.043] has:nginx1-deployment
I0211 20:42:27.044] Successful
I0211 20:42:27.044] message:deployment.apps/nginx1-deployment 
I0211 20:42:27.044] REVISION  CHANGE-CAUSE
I0211 20:42:27.044] 1         <none>
I0211 20:42:27.044] 
I0211 20:42:27.044] deployment.apps/nginx0-deployment 
I0211 20:42:27.044] REVISION  CHANGE-CAUSE
I0211 20:42:27.044] 1         <none>
I0211 20:42:27.045] 
I0211 20:42:27.045] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 20:42:27.045] has:Object 'Kind' is missing
I0211 20:42:27.123] deployment.apps "nginx1-deployment" force deleted
I0211 20:42:27.128] deployment.apps "nginx0-deployment" force deleted
W0211 20:42:27.228] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0211 20:42:27.229] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 20:42:28.232] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:42:28.381] (Breplicationcontroller/busybox0 created
I0211 20:42:28.386] replicationcontroller/busybox1 created
I0211 20:42:28.487] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 20:42:28.576] (BSuccessful
I0211 20:42:28.577] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0211 20:42:28.578] message:no rollbacker has been implemented for "ReplicationController"
I0211 20:42:28.579] no rollbacker has been implemented for "ReplicationController"
I0211 20:42:28.579] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 20:42:28.579] has:Object 'Kind' is missing
I0211 20:42:28.673] Successful
I0211 20:42:28.674] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 20:42:28.674] error: replicationcontrollers "busybox0" pausing is not supported
I0211 20:42:28.674] error: replicationcontrollers "busybox1" pausing is not supported
I0211 20:42:28.674] has:Object 'Kind' is missing
I0211 20:42:28.675] Successful
I0211 20:42:28.676] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 20:42:28.676] error: replicationcontrollers "busybox0" pausing is not supported
I0211 20:42:28.676] error: replicationcontrollers "busybox1" pausing is not supported
I0211 20:42:28.676] has:replicationcontrollers "busybox0" pausing is not supported
I0211 20:42:28.678] Successful
I0211 20:42:28.678] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 20:42:28.678] error: replicationcontrollers "busybox0" pausing is not supported
I0211 20:42:28.679] error: replicationcontrollers "busybox1" pausing is not supported
I0211 20:42:28.679] has:replicationcontrollers "busybox1" pausing is not supported
I0211 20:42:28.769] Successful
I0211 20:42:28.770] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 20:42:28.770] error: replicationcontrollers "busybox0" resuming is not supported
I0211 20:42:28.770] error: replicationcontrollers "busybox1" resuming is not supported
I0211 20:42:28.770] has:Object 'Kind' is missing
I0211 20:42:28.771] Successful
I0211 20:42:28.772] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 20:42:28.772] error: replicationcontrollers "busybox0" resuming is not supported
I0211 20:42:28.772] error: replicationcontrollers "busybox1" resuming is not supported
I0211 20:42:28.772] has:replicationcontrollers "busybox0" resuming is not supported
I0211 20:42:28.774] Successful
I0211 20:42:28.774] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 20:42:28.775] error: replicationcontrollers "busybox0" resuming is not supported
I0211 20:42:28.775] error: replicationcontrollers "busybox1" resuming is not supported
I0211 20:42:28.775] has:replicationcontrollers "busybox0" resuming is not supported
I0211 20:42:28.848] replicationcontroller "busybox0" force deleted
I0211 20:42:28.856] replicationcontroller "busybox1" force deleted
W0211 20:42:28.957] I0211 20:42:28.383987   57649 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549917739-17744", Name:"busybox0", UID:"8bbc8060-2e3d-11e9-bc7d-0242ac110002", APIVersion:"v1", ResourceVersion:"1149", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-z8rmg
W0211 20:42:28.958] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0211 20:42:28.958] I0211 20:42:28.389164   57649 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549917739-17744", Name:"busybox1", UID:"8bbd44c8-2e3d-11e9-bc7d-0242ac110002", APIVersion:"v1", ResourceVersion:"1151", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-4ndmb
W0211 20:42:28.958] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0211 20:42:28.959] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
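The "pausing/resuming is not supported" and "no rollbacker has been implemented" messages are likewise expected: ReplicationControllers have no rollout machinery, and the recursive rc fixture again includes a manifest with the kind key typoed. A hedged sketch of the invocations being checked (paths taken from the log; exact commands are in the test scripts):

  kubectl rollout pause -f hack/testdata/recursive/rc --recursive    # RCs reject pause
  kubectl rollout resume -f hack/testdata/recursive/rc --recursive   # RCs reject resume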
I0211 20:42:29.863] Recording: run_namespace_tests
I0211 20:42:29.864] Running command: run_namespace_tests
I0211 20:42:29.887] 
I0211 20:42:29.889] +++ Running case: test-cmd.run_namespace_tests 
I0211 20:42:29.891] +++ working dir: /go/src/k8s.io/kubernetes
I0211 20:42:29.894] +++ command: run_namespace_tests
I0211 20:42:29.903] +++ [0211 20:42:29] Testing kubectl(v1:namespaces)
I0211 20:42:29.979] namespace/my-namespace created
I0211 20:42:30.081] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0211 20:42:30.160] namespace "my-namespace" deleted
I0211 20:42:35.281] namespace/my-namespace condition met
I0211 20:42:35.370] Successful
I0211 20:42:35.370] message:Error from server (NotFound): namespaces "my-namespace" not found
I0211 20:42:35.370] has: not found
I0211 20:42:35.470] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0211 20:42:35.542] namespace/other created
I0211 20:42:35.638] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I0211 20:42:35.730] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:42:35.886] pod/valid-pod created
I0211 20:42:35.991] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 20:42:36.087] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 20:42:36.165] Successful
I0211 20:42:36.165] message:error: a resource cannot be retrieved by name across all namespaces
I0211 20:42:36.165] has:a resource cannot be retrieved by name across all namespaces
I0211 20:42:36.259] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 20:42:36.337] pod "valid-pod" force deleted
I0211 20:42:36.441] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 20:42:36.520] namespace "other" deleted
W0211 20:42:36.621] E0211 20:42:35.963402   57649 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0211 20:42:36.622] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0211 20:42:36.622] I0211 20:42:36.392987   57649 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0211 20:42:36.622] I0211 20:42:36.493365   57649 controller_utils.go:1028] Caches are synced for garbage collector controller
W0211 20:42:39.049] I0211 20:42:39.048681   57649 horizontal.go:320] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1549917739-17744
W0211 20:42:39.053] I0211 20:42:39.053354   57649 horizontal.go:320] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1549917739-17744
W0211 20:42:40.278] I0211 20:42:40.277556   57649 namespace_controller.go:171] Namespace has been deleted my-namespace
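The namespace checks above boil down to a handful of kubectl invocations; an approximate reconstruction (resource names from the log, exact flags may differ in core.sh):

  kubectl create namespace my-namespace
  kubectl delete namespace my-namespace
  kubectl wait --for=delete ns/my-namespace        # prints "condition met" once it is gone
  kubectl create namespace other
  kubectl get pods --namespace=other               # namespace-scoped list succeeds
  kubectl get pods valid-pod --all-namespaces      # rejected: cannot fetch by name across all namespaces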
... skipping 112 lines ...
I0211 20:42:57.137] +++ command: run_client_config_tests
I0211 20:42:57.151] +++ [0211 20:42:57] Creating namespace namespace-1549917777-6423
I0211 20:42:57.225] namespace/namespace-1549917777-6423 created
I0211 20:42:57.298] Context "test" modified.
I0211 20:42:57.306] +++ [0211 20:42:57] Testing client config
I0211 20:42:57.376] Successful
I0211 20:42:57.377] message:error: stat missing: no such file or directory
I0211 20:42:57.377] has:missing: no such file or directory
I0211 20:42:57.449] Successful
I0211 20:42:57.450] message:error: stat missing: no such file or directory
I0211 20:42:57.450] has:missing: no such file or directory
I0211 20:42:57.520] Successful
I0211 20:42:57.520] message:error: stat missing: no such file or directory
I0211 20:42:57.520] has:missing: no such file or directory
I0211 20:42:57.592] Successful
I0211 20:42:57.592] message:Error in configuration: context was not found for specified context: missing-context
I0211 20:42:57.593] has:context was not found for specified context: missing-context
I0211 20:42:57.663] Successful
I0211 20:42:57.663] message:error: no server found for cluster "missing-cluster"
I0211 20:42:57.663] has:no server found for cluster "missing-cluster"
I0211 20:42:57.737] Successful
I0211 20:42:57.737] message:error: auth info "missing-user" does not exist
I0211 20:42:57.737] has:auth info "missing-user" does not exist
I0211 20:42:57.880] Successful
I0211 20:42:57.881] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0211 20:42:57.881] has:Error loading config file
I0211 20:42:57.952] Successful
I0211 20:42:57.953] message:error: stat missing-config: no such file or directory
I0211 20:42:57.953] has:no such file or directory
I0211 20:42:57.967] +++ exit code: 0
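The client-config case above feeds kubectl deliberately bad kubeconfig inputs and asserts on the error text; roughly (the flags are real kubectl flags, the surrounding test plumbing is omitted and the exact commands may differ):

  kubectl get pods --kubeconfig=missing             # stat missing: no such file or directory
  kubectl get pods --context=missing-context        # context was not found for specified context
  kubectl get pods --cluster=missing-cluster        # no server found for cluster "missing-cluster"
  kubectl get pods --user=missing-user              # auth info "missing-user" does not exist
  kubectl get pods --kubeconfig=/tmp/newconfig.yaml # config file declares an unregistered version ("v-1")
  kubectl get pods --kubeconfig=missing-config      # stat missing-config: no such file or directory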
I0211 20:42:58.003] Recording: run_service_accounts_tests
I0211 20:42:58.003] Running command: run_service_accounts_tests
I0211 20:42:58.024] 
I0211 20:42:58.029] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 34 lines ...
I0211 20:43:04.841] Labels:                        run=pi
I0211 20:43:04.841] Annotations:                   <none>
I0211 20:43:04.842] Schedule:                      59 23 31 2 *
I0211 20:43:04.842] Concurrency Policy:            Allow
I0211 20:43:04.842] Suspend:                       False
I0211 20:43:04.842] Successful Job History Limit:  824640935960
I0211 20:43:04.842] Failed Job History Limit:      1
I0211 20:43:04.843] Starting Deadline Seconds:     <unset>
I0211 20:43:04.843] Selector:                      <unset>
I0211 20:43:04.843] Parallelism:                   <unset>
I0211 20:43:04.843] Completions:                   <unset>
I0211 20:43:04.843] Pod Template:
I0211 20:43:04.843]   Labels:  run=pi
... skipping 31 lines ...
I0211 20:43:05.382]                 job-name=test-job
I0211 20:43:05.382]                 run=pi
I0211 20:43:05.383] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0211 20:43:05.383] Parallelism:    1
I0211 20:43:05.383] Completions:    1
I0211 20:43:05.383] Start Time:     Mon, 11 Feb 2019 20:43:05 +0000
I0211 20:43:05.383] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0211 20:43:05.383] Pod Template:
I0211 20:43:05.383]   Labels:  controller-uid=a1a08a02-2e3d-11e9-bc7d-0242ac110002
I0211 20:43:05.383]            job-name=test-job
I0211 20:43:05.383]            run=pi
I0211 20:43:05.384]   Containers:
I0211 20:43:05.384]    pi:
... skipping 329 lines ...
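The truncated describe output above comes from the CronJob/Job part of the suite: a CronJob named pi is described, then a Job is created from it (the cronjob.kubernetes.io/instantiate: manual annotation shown above is what that path stamps on the Job). Approximately, with names taken from the log:

  kubectl describe cronjob pi
  kubectl create job test-job --from=cronjob/pi
  kubectl describe job test-job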
I0211 20:43:15.180]   selector:
I0211 20:43:15.180]     role: padawan
I0211 20:43:15.180]   sessionAffinity: None
I0211 20:43:15.180]   type: ClusterIP
I0211 20:43:15.180] status:
I0211 20:43:15.180]   loadBalancer: {}
W0211 20:43:15.281] error: you must specify resources by --filename when --local is set.
W0211 20:43:15.281] Example resource specifications include:
W0211 20:43:15.281]    '-f rsrc.yaml'
W0211 20:43:15.281]    '--filename=rsrc.json'
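The warning above is kubectl refusing a --local operation that was given a live resource reference instead of a file; --local never contacts the server, so the object must come from --filename. A hedged sketch of the pattern being tested (the concrete set selector commands and file names are in core.sh; the file name below is illustrative):

  kubectl set selector -f redis-master-service.yaml role=padawan --local -o yaml   # ok: object read from file
  kubectl set selector services/redis-master role=padawan --local -o yaml          # rejected with the error above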
I0211 20:43:15.382] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0211 20:43:15.524] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0211 20:43:15.610] (Bservice "redis-master" deleted
... skipping 94 lines ...
I0211 20:43:23.631] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 20:43:23.790] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0211 20:43:23.973] daemonset.extensions/bind rolled back
I0211 20:43:24.121] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0211 20:43:24.279] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0211 20:43:24.412] Successful
I0211 20:43:24.413] message:error: unable to find specified revision 1000000 in history
I0211 20:43:24.413] has:unable to find specified revision
I0211 20:43:24.508] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0211 20:43:24.607] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0211 20:43:24.708] daemonset.extensions/bind rolled back
I0211 20:43:24.810] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0211 20:43:24.907] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
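The rollback assertions above exercise kubectl rollout undo against the bind DaemonSet, including a revision that does not exist; roughly (exact arguments are in apps.sh):

  kubectl rollout undo daemonset/bind                        # "daemonset.extensions/bind rolled back"
  kubectl rollout undo daemonset/bind --to-revision=1000000  # unable to find specified revision in history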
... skipping 25 lines ...
I0211 20:43:26.322] Namespace:    namespace-1549917805-32512
I0211 20:43:26.322] Selector:     app=guestbook,tier=frontend
I0211 20:43:26.323] Labels:       app=guestbook
I0211 20:43:26.323]               tier=frontend
I0211 20:43:26.323] Annotations:  <none>
I0211 20:43:26.323] Replicas:     3 current / 3 desired
I0211 20:43:26.323] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 20:43:26.324] Pod Template:
I0211 20:43:26.324]   Labels:  app=guestbook
I0211 20:43:26.324]            tier=frontend
I0211 20:43:26.324]   Containers:
I0211 20:43:26.324]    php-redis:
I0211 20:43:26.324]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0211 20:43:26.439] Namespace:    namespace-1549917805-32512
I0211 20:43:26.439] Selector:     app=guestbook,tier=frontend
I0211 20:43:26.439] Labels:       app=guestbook
I0211 20:43:26.440]               tier=frontend
I0211 20:43:26.440] Annotations:  <none>
I0211 20:43:26.440] Replicas:     3 current / 3 desired
I0211 20:43:26.440] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 20:43:26.440] Pod Template:
I0211 20:43:26.440]   Labels:  app=guestbook
I0211 20:43:26.440]            tier=frontend
I0211 20:43:26.440]   Containers:
I0211 20:43:26.441]    php-redis:
I0211 20:43:26.441]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 21 lines ...
I0211 20:43:26.646] Namespace:    namespace-1549917805-32512
I0211 20:43:26.647] Selector:     app=guestbook,tier=frontend
I0211 20:43:26.647] Labels:       app=guestbook
I0211 20:43:26.647]               tier=frontend
I0211 20:43:26.647] Annotations:  <none>
I0211 20:43:26.647] Replicas:     3 current / 3 desired
I0211 20:43:26.648] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 20:43:26.648] Pod Template:
I0211 20:43:26.648]   Labels:  app=guestbook
I0211 20:43:26.648]            tier=frontend
I0211 20:43:26.648]   Containers:
I0211 20:43:26.649]    php-redis:
I0211 20:43:26.649]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0211 20:43:26.676] Namespace:    namespace-1549917805-32512
I0211 20:43:26.676] Selector:     app=guestbook,tier=frontend
I0211 20:43:26.676] Labels:       app=guestbook
I0211 20:43:26.676]               tier=frontend
I0211 20:43:26.676] Annotations:  <none>
I0211 20:43:26.676] Replicas:     3 current / 3 desired
I0211 20:43:26.676] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 20:43:26.676] Pod Template:
I0211 20:43:26.677]   Labels:  app=guestbook
I0211 20:43:26.677]            tier=frontend
I0211 20:43:26.677]   Containers:
I0211 20:43:26.677]    php-redis:
I0211 20:43:26.677]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0211 20:43:26.829] Namespace:    namespace-1549917805-32512
I0211 20:43:26.829] Selector:     app=guestbook,tier=frontend
I0211 20:43:26.829] Labels:       app=guestbook
I0211 20:43:26.830]               tier=frontend
I0211 20:43:26.830] Annotations:  <none>
I0211 20:43:26.830] Replicas:     3 current / 3 desired
I0211 20:43:26.830] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 20:43:26.830] Pod Template:
I0211 20:43:26.830]   Labels:  app=guestbook
I0211 20:43:26.830]            tier=frontend
I0211 20:43:26.830]   Containers:
I0211 20:43:26.830]    php-redis:
I0211 20:43:26.831]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0211 20:43:26.949] Namespace:    namespace-1549917805-32512
I0211 20:43:26.949] Selector:     app=guestbook,tier=frontend
I0211 20:43:26.949] Labels:       app=guestbook
I0211 20:43:26.949]               tier=frontend
I0211 20:43:26.950] Annotations:  <none>
I0211 20:43:26.950] Replicas:     3 current / 3 desired
I0211 20:43:26.950] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 20:43:26.950] Pod Template:
I0211 20:43:26.950]   Labels:  app=guestbook
I0211 20:43:26.950]            tier=frontend
I0211 20:43:26.950]   Containers:
I0211 20:43:26.950]    php-redis:
I0211 20:43:26.950]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0211 20:43:27.061] Namespace:    namespace-1549917805-32512
I0211 20:43:27.062] Selector:     app=guestbook,tier=frontend
I0211 20:43:27.062] Labels:       app=guestbook
I0211 20:43:27.062]               tier=frontend
I0211 20:43:27.062] Annotations:  <none>
I0211 20:43:27.062] Replicas:     3 current / 3 desired
I0211 20:43:27.062] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 20:43:27.062] Pod Template:
I0211 20:43:27.062]   Labels:  app=guestbook
I0211 20:43:27.062]            tier=frontend
I0211 20:43:27.063]   Containers:
I0211 20:43:27.063]    php-redis:
I0211 20:43:27.063]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0211 20:43:27.175] Namespace:    namespace-1549917805-32512
I0211 20:43:27.175] Selector:     app=guestbook,tier=frontend
I0211 20:43:27.175] Labels:       app=guestbook
I0211 20:43:27.175]               tier=frontend
I0211 20:43:27.175] Annotations:  <none>
I0211 20:43:27.175] Replicas:     3 current / 3 desired
I0211 20:43:27.176] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 20:43:27.176] Pod Template:
I0211 20:43:27.176]   Labels:  app=guestbook
I0211 20:43:27.176]            tier=frontend
I0211 20:43:27.176]   Containers:
I0211 20:43:27.176]    php-redis:
I0211 20:43:27.176]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
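The repeated Namespace/Selector/Replicas blocks above are kubectl describe output for the same guestbook frontend ReplicationController, invoked several ways by the describe tests; one representative form (namespace name from the log):

  kubectl describe replicationcontrollers/frontend --namespace=namespace-1549917805-32512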
I0211 20:43:28.060] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I0211 20:43:28