Result: FAILURE
Tests: 1 failed / 2568 succeeded
Started: 2020-03-25 17:40
Elapsed: 30m10s
Revision: master
Resultstore: https://source.cloud.google.com/results/invocations/3e4c338b-12ef-4110-b419-16b71244fc92/targets/test

Test Failures

k8s.io/kubernetes/test/integration/scheduler TestPreScorePlugin 4.33s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreScorePlugin$
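
The failing test exercises the scheduler framework's PreScore extension point, which runs once per scheduling cycle after filtering and before node scoring. For orientation, below is a minimal sketch of a plugin implementing that extension point against the framework of this era; the package path k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1 and the noopPreScore type are assumptions for illustration, not the test's actual fixture. The integration test presumably wires a recording variant of this shape into a test scheduler and asserts it is invoked.

package example

import (
	"context"

	v1 "k8s.io/api/core/v1"
	framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
)

// noopPreScore is a hypothetical plugin; the integration test registers its
// own fixture, which this sketch only approximates.
type noopPreScore struct{}

// Name identifies the plugin in scheduler profiles and test assertions.
func (p *noopPreScore) Name() string { return "noop-pre-score" }

// PreScore is called once per pod with the nodes that passed filtering. A
// real plugin would stash data in CycleState for its Score counterpart to
// read; returning a nil status reports success.
func (p *noopPreScore) PreScore(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodes []*v1.Node) *framework.Status {
	return nil
}

// Compile-time check that the sketch satisfies the extension point.
var _ framework.PreScorePlugin = &noopPreScore{}
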
=== RUN   TestPreScorePlugin
W0325 18:06:13.922910  113876 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0325 18:06:13.922938  113876 services.go:51] Setting service IP to "10.0.0.1" (read-write).
I0325 18:06:13.922951  113876 master.go:314] Node port range unspecified. Defaulting to 30000-32767.
I0325 18:06:13.922967  113876 master.go:270] Using reconciler: 
I0325 18:06:13.923107  113876 config.go:627] Not requested to run hook priority-and-fairness-config-consumer
I0325 18:06:13.924870  113876 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.925040  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:13.925136  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:13.921913  113876 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0325 18:06:13.926625  113876 store.go:1366] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0325 18:06:13.926691  113876 reflector.go:211] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0325 18:06:13.926756  113876 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.927154  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:13.927180  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:13.928063  113876 store.go:1366] Monitoring events count at <storage-prefix>//events
I0325 18:06:13.928123  113876 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.928222  113876 reflector.go:211] Listing and watching *core.Event from storage/cacher.go:/events
I0325 18:06:13.928251  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:13.928272  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:13.928925  113876 store.go:1366] Monitoring limitranges count at <storage-prefix>//limitranges
I0325 18:06:13.929099  113876 reflector.go:211] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0325 18:06:13.929263  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:13.929335  113876 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.929699  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:13.929736  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:13.929777  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:13.930959  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:13.931984  113876 store.go:1366] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0325 18:06:13.932673  113876 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.932045  113876 reflector.go:211] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0325 18:06:13.933074  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:13.933260  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:13.935860  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:13.936193  113876 store.go:1366] Monitoring secrets count at <storage-prefix>//secrets
I0325 18:06:13.936408  113876 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.936533  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:13.936570  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:13.936957  113876 reflector.go:211] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0325 18:06:13.940277  113876 store.go:1366] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0325 18:06:13.940339  113876 reflector.go:211] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0325 18:06:13.940506  113876 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.940652  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:13.940676  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:13.941438  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:13.941612  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:13.941644  113876 store.go:1366] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0325 18:06:13.941763  113876 reflector.go:211] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0325 18:06:13.941981  113876 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.942250  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:13.942280  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:13.942702  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:13.943137  113876 store.go:1366] Monitoring configmaps count at <storage-prefix>//configmaps
I0325 18:06:13.943396  113876 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.943553  113876 reflector.go:211] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0325 18:06:13.943653  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:13.944069  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:13.945743  113876 store.go:1366] Monitoring namespaces count at <storage-prefix>//namespaces
I0325 18:06:13.946034  113876 reflector.go:211] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0325 18:06:13.946277  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:13.946017  113876 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.946558  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:13.946585  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:13.947967  113876 store.go:1366] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0325 18:06:13.948119  113876 reflector.go:211] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0325 18:06:13.948232  113876 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.948791  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:13.948830  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:13.950397  113876 store.go:1366] Monitoring nodes count at <storage-prefix>//minions
I0325 18:06:13.950605  113876 reflector.go:211] Listing and watching *core.Node from storage/cacher.go:/minions
I0325 18:06:13.950935  113876 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.951621  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:13.952986  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:13.953097  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:13.953518  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:13.954493  113876 store.go:1366] Monitoring pods count at <storage-prefix>//pods
I0325 18:06:13.954523  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:13.954631  113876 reflector.go:211] Listing and watching *core.Pod from storage/cacher.go:/pods
I0325 18:06:13.954910  113876 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.955205  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:13.955257  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:13.956116  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:13.956238  113876 store.go:1366] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0325 18:06:13.956523  113876 reflector.go:211] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0325 18:06:13.958266  113876 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.958750  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:13.959400  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:13.959350  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:13.960518  113876 store.go:1366] Monitoring services count at <storage-prefix>//services/specs
I0325 18:06:13.960593  113876 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.960760  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:13.960798  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:13.961089  113876 reflector.go:211] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0325 18:06:13.962122  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:13.962325  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:13.963368  113876 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.963567  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:13.963607  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:13.964656  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:13.967703  113876 store.go:1366] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0325 18:06:13.967732  113876 rest.go:113] the default service ipfamily for this cluster is: IPv4
I0325 18:06:13.967905  113876 reflector.go:211] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0325 18:06:13.969226  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:13.970008  113876 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.970327  113876 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.971262  113876 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.973192  113876 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.974340  113876 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.975304  113876 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.976776  113876 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.977061  113876 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.977297  113876 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.977892  113876 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.979870  113876 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.980238  113876 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.981208  113876 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.981658  113876 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.983705  113876 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.983997  113876 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.984686  113876 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.984893  113876 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.985159  113876 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.985369  113876 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.985575  113876 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.985719  113876 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.985909  113876 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.988009  113876 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.988372  113876 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.990605  113876 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.991642  113876 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.991933  113876 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.992221  113876 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.994400  113876 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.994899  113876 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.995775  113876 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.997909  113876 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:13.998848  113876 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.001046  113876 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.001493  113876 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.001614  113876 master.go:527] Skipping disabled API group "auditregistration.k8s.io".
I0325 18:06:14.001644  113876 master.go:538] Enabling API group "authentication.k8s.io".
I0325 18:06:14.001659  113876 master.go:538] Enabling API group "authorization.k8s.io".
I0325 18:06:14.001916  113876 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.002077  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.002105  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.002947  113876 store.go:1366] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0325 18:06:14.003005  113876 reflector.go:211] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0325 18:06:14.003219  113876 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.003397  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.003438  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.004322  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.004877  113876 store.go:1366] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0325 18:06:14.005101  113876 reflector.go:211] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0325 18:06:14.005365  113876 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.005528  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.005590  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.006600  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.006729  113876 reflector.go:211] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0325 18:06:14.006685  113876 store.go:1366] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0325 18:06:14.007048  113876 master.go:538] Enabling API group "autoscaling".
I0325 18:06:14.007407  113876 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.007582  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.007607  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.008885  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.009657  113876 store.go:1366] Monitoring jobs.batch count at <storage-prefix>//jobs
I0325 18:06:14.009829  113876 reflector.go:211] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0325 18:06:14.009957  113876 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.010322  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.010356  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.011191  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.011370  113876 store.go:1366] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0325 18:06:14.011493  113876 reflector.go:211] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0325 18:06:14.011498  113876 master.go:538] Enabling API group "batch".
I0325 18:06:14.011889  113876 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.012056  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.012163  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.013962  113876 store.go:1366] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0325 18:06:14.013990  113876 master.go:538] Enabling API group "certificates.k8s.io".
I0325 18:06:14.014122  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.014133  113876 reflector.go:211] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0325 18:06:14.014679  113876 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.014809  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.014827  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.015542  113876 store.go:1366] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0325 18:06:14.015624  113876 reflector.go:211] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0325 18:06:14.015772  113876 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.015631  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.016058  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.016165  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.016660  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.017449  113876 store.go:1366] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0325 18:06:14.017484  113876 master.go:538] Enabling API group "coordination.k8s.io".
I0325 18:06:14.017599  113876 reflector.go:211] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0325 18:06:14.017688  113876 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.017832  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.017860  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.018635  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.019010  113876 store.go:1366] Monitoring endpointslices.discovery.k8s.io count at <storage-prefix>//endpointslices
I0325 18:06:14.019037  113876 master.go:538] Enabling API group "discovery.k8s.io".
I0325 18:06:14.019098  113876 reflector.go:211] Listing and watching *discovery.EndpointSlice from storage/cacher.go:/endpointslices
I0325 18:06:14.019259  113876 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.019403  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.019421  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.020302  113876 store.go:1366] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0325 18:06:14.020330  113876 master.go:538] Enabling API group "extensions".
I0325 18:06:14.020497  113876 reflector.go:211] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0325 18:06:14.022128  113876 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.022285  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.022303  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.022487  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.023342  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.023648  113876 store.go:1366] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0325 18:06:14.023732  113876 reflector.go:211] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0325 18:06:14.024984  113876 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.025117  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.025135  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.025952  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.026484  113876 store.go:1366] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0325 18:06:14.027421  113876 storage_factory.go:285] storing ingressclasses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.026637  113876 reflector.go:211] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0325 18:06:14.027708  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.027735  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.028452  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.028490  113876 store.go:1366] Monitoring ingressclasses.networking.k8s.io count at <storage-prefix>//ingressclasses
I0325 18:06:14.028820  113876 master.go:538] Enabling API group "networking.k8s.io".
I0325 18:06:14.028516  113876 reflector.go:211] Listing and watching *networking.IngressClass from storage/cacher.go:/ingressclasses
I0325 18:06:14.029292  113876 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.029917  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.030075  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.030114  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.031246  113876 store.go:1366] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0325 18:06:14.031270  113876 master.go:538] Enabling API group "node.k8s.io".
I0325 18:06:14.031303  113876 reflector.go:211] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0325 18:06:14.031552  113876 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.031905  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.032067  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.032683  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.033466  113876 store.go:1366] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0325 18:06:14.033676  113876 reflector.go:211] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0325 18:06:14.033738  113876 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.033871  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.033898  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.035349  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.035404  113876 store.go:1366] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0325 18:06:14.035427  113876 master.go:538] Enabling API group "policy".
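The recurring parsed scheme: "endpoint" / ccResolverWrapper pairs come from the etcd v3 client: each store opens a gRPC connection whose custom "endpoint" resolver pushes the server list ({http://127.0.0.1:2379 ...}) to the underlying ClientConn. A hedged sketch of the same dial path through the public etcd client (import path as of this era; the Get call is only there to exercise the connection):

package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// clientv3.New registers the "endpoint" resolver and dials via gRPC,
	// which is what emits the client.go/endpoint.go lines above.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	resp, err := cli.Get(context.TODO(), "/", clientv3.WithPrefix(), clientv3.WithKeysOnly())
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d keys\n", len(resp.Kvs))
}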
I0325 18:06:14.035489  113876 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.035657  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.035689  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.035868  113876 reflector.go:211] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0325 18:06:14.037127  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.037437  113876 store.go:1366] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0325 18:06:14.037545  113876 reflector.go:211] Listing and watching *rbac.Role from storage/cacher.go:/roles
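Every reflector.go:211 "Listing and watching" line is a cacher starting the standard list-then-watch loop: one LIST seeds the cache, then a WATCH resumes from the returned resourceVersion. The same pattern is exposed by client-go; a minimal sketch against pods (illustrative, not the apiserver-internal code path; the kubeconfig location is an assumption):

package main

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	lw := cache.NewListWatchFromClient(cs.CoreV1().RESTClient(), "pods", "default", fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &v1.Pod{}, store, 0) // 0 = no periodic resync
	stop := make(chan struct{})
	r.Run(stop) // blocks: LIST once, then WATCH from the listed resourceVersion
}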
I0325 18:06:14.038254  113876 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.039818  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.040658  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.040797  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.041711  113876 store.go:1366] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0325 18:06:14.041792  113876 reflector.go:211] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0325 18:06:14.041834  113876 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.042005  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.042040  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.042850  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.043085  113876 store.go:1366] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0325 18:06:14.043185  113876 reflector.go:211] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0325 18:06:14.043381  113876 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.043541  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.043569  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.047026  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.047056  113876 store.go:1366] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
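The watch_cache.go:449 "Replace watchCache (rev: 34749)" lines record each cache being seeded at etcd revision 34749; watches at or after that resourceVersion can then be served from memory. A hedged sketch of watching from that point with a recent client-go (the clientset is assumed to be built as in the reflector sketch above):

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchFromRev starts a watch at the given resourceVersion, e.g. "34749",
// so events are replayed from the watch cache rather than re-listed.
func watchFromRev(cs kubernetes.Interface, rv string) error {
	w, err := cs.CoreV1().Pods("").Watch(context.TODO(), metav1.ListOptions{ResourceVersion: rv})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type)
	}
	return nil
}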
I0325 18:06:14.047159  113876 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.047349  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.047617  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.048178  113876 reflector.go:211] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0325 18:06:14.049347  113876 watch_cache.go:449] Replace watchCache (rev: 34749)
I0325 18:06:14.049372  113876 store.go:1366] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0325 18:06:14.049530  113876 reflector.go:211] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0325 18:06:14.049717  113876 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.050053  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.050125  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.051418  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.052315  113876 store.go:1366] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0325 18:06:14.052383  113876 reflector.go:211] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0325 18:06:14.052389  113876 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.052869  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.052902  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.053338  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.053937  113876 store.go:1366] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0325 18:06:14.054211  113876 reflector.go:211] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0325 18:06:14.054591  113876 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.054811  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.054949  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.055270  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.055998  113876 store.go:1366] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0325 18:06:14.056031  113876 master.go:538] Enabling API group "rbac.authorization.k8s.io".
I0325 18:06:14.056213  113876 reflector.go:211] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0325 18:06:14.058210  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
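Each master.go:538 "Enabling API group" line commits a group's REST storage into the served API surface, and the result is observable through the discovery endpoint. A small sketch listing the groups a server actually serves (kubeconfig path is an assumption):

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	gs, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range gs.Groups {
		fmt.Println(g.Name) // e.g. rbac.authorization.k8s.io, policy, ...
	}
}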
I0325 18:06:14.060070  113876 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.060259  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.060296  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.062726  113876 store.go:1366] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0325 18:06:14.063050  113876 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.063343  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.063381  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.063735  113876 reflector.go:211] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0325 18:06:14.064404  113876 store.go:1366] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0325 18:06:14.064478  113876 master.go:538] Enabling API group "scheduling.k8s.io".
I0325 18:06:14.064588  113876 reflector.go:211] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0325 18:06:14.064646  113876 master.go:527] Skipping disabled API group "settings.k8s.io".
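master.go:527 shows the inverse case: settings.k8s.io is compiled in but disabled by default, so its storage is never wired. On a stock kube-apiserver the real --runtime-config flag turns such groups on; the same flag also enables versions skipped further down as having no resources (e.g. batch/v2alpha1). A hedged example, where <other flags...> stands for the rest of the server's command line:

kube-apiserver <other flags...> --runtime-config=settings.k8s.io/v1alpha1=true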
I0325 18:06:14.064880  113876 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.065025  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.065050  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.065642  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.066190  113876 store.go:1366] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0325 18:06:14.066255  113876 reflector.go:211] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0325 18:06:14.066434  113876 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.066608  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.066640  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.067673  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.068552  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.069180  113876 store.go:1366] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0325 18:06:14.069422  113876 reflector.go:211] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0325 18:06:14.069724  113876 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.070340  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.070377  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.071669  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.072775  113876 store.go:1366] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0325 18:06:14.072965  113876 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.073080  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.073109  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.073391  113876 reflector.go:211] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0325 18:06:14.074601  113876 store.go:1366] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0325 18:06:14.074678  113876 watch_cache.go:449] Replace watchCache (rev: 34749)
I0325 18:06:14.074825  113876 reflector.go:211] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0325 18:06:14.075477  113876 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.076027  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.076273  113876 watch_cache.go:449] Replace watchCache (rev: 34749)
I0325 18:06:14.076602  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.077536  113876 store.go:1366] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0325 18:06:14.077653  113876 reflector.go:211] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0325 18:06:14.077953  113876 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.078232  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.078263  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.080260  113876 store.go:1366] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0325 18:06:14.080535  113876 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.080605  113876 reflector.go:211] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0325 18:06:14.080796  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.080823  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.082977  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.083161  113876 store.go:1366] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0325 18:06:14.083286  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.083646  113876 reflector.go:211] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0325 18:06:14.083729  113876 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.083894  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.083936  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.084472  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.085082  113876 store.go:1366] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0325 18:06:14.085181  113876 master.go:538] Enabling API group "storage.k8s.io".
I0325 18:06:14.085241  113876 master.go:527] Skipping disabled API group "flowcontrol.apiserver.k8s.io".
I0325 18:06:14.085337  113876 reflector.go:211] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0325 18:06:14.085660  113876 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.085869  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.085920  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.086459  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.087325  113876 store.go:1366] Monitoring deployments.apps count at <storage-prefix>//deployments
I0325 18:06:14.087392  113876 reflector.go:211] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0325 18:06:14.087616  113876 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.087791  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.087813  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.088449  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.089507  113876 store.go:1366] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0325 18:06:14.089682  113876 reflector.go:211] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0325 18:06:14.089752  113876 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.089952  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.089984  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.092129  113876 store.go:1366] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0325 18:06:14.092155  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.092241  113876 reflector.go:211] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0325 18:06:14.092556  113876 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.092803  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.092833  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.094046  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.095562  113876 store.go:1366] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0325 18:06:14.095689  113876 reflector.go:211] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0325 18:06:14.096103  113876 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.096331  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.096379  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.096879  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.098600  113876 store.go:1366] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0325 18:06:14.098639  113876 master.go:538] Enabling API group "apps".
I0325 18:06:14.098715  113876 reflector.go:211] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
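With "apps" enabled, the deployments, statefulsets, daemonsets, replicasets, and controllerrevisions stores registered above become reachable as ordinary REST resources. A one-function sketch exercising one of them (clientset construction as in the earlier sketches; the "default" namespace is an assumption):

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listDeployments exercises the apps/v1 storage wired up above.
func listDeployments(cs kubernetes.Interface) error {
	ds, err := cs.AppsV1().Deployments("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, d := range ds.Items {
		fmt.Println(d.Name)
	}
	return nil
}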
I0325 18:06:14.098888  113876 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.099179  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.099208  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.099759  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.100476  113876 store.go:1366] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0325 18:06:14.100706  113876 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.100844  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.100865  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.100889  113876 reflector.go:211] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0325 18:06:14.102009  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.102248  113876 store.go:1366] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0325 18:06:14.102309  113876 reflector.go:211] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0325 18:06:14.102617  113876 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.102770  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.102796  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.103291  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.103752  113876 store.go:1366] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0325 18:06:14.103974  113876 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.104100  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.104119  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.104127  113876 reflector.go:211] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0325 18:06:14.105105  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.105628  113876 store.go:1366] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0325 18:06:14.105729  113876 reflector.go:211] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0325 18:06:14.105756  113876 master.go:538] Enabling API group "admissionregistration.k8s.io".
I0325 18:06:14.105871  113876 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.107281  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.107696  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.107726  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:14.108499  113876 store.go:1366] Monitoring events count at <storage-prefix>//events
I0325 18:06:14.108522  113876 master.go:538] Enabling API group "events.k8s.io".
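Note the preceding "storing events in v1" dump: the events.k8s.io group registered here persists its objects as core v1 events, so the same events are readable through either group. A short sketch reading the core-group view (same clientset assumption as above):

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listEvents reads events through the legacy core/v1 endpoint; the same
// objects are also served under events.k8s.io.
func listEvents(cs kubernetes.Interface) error {
	evs, err := cs.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d events\n", len(evs.Items))
	return nil
}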
I0325 18:06:14.108823  113876 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.109227  113876 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.109582  113876 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.111086  113876 reflector.go:211] Listing and watching *core.Event from storage/cacher.go:/events
I0325 18:06:14.112362  113876 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.112554  113876 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.112678  113876 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.112970  113876 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.113101  113876 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.113456  113876 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.113634  113876 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
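The authorization.k8s.io storages registered here (subjectaccessreviews and friends) are effectively virtual: a create does not persist anything, it runs an authorization check and returns the verdict. A hedged sketch issuing a SelfSubjectAccessReview (clientset as before; verb/resource/namespace values are example choices):

package sketch

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// canIListPods asks the server whether the current identity may list pods.
func canIListPods(cs kubernetes.Interface) (bool, error) {
	sar := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:      "list",
				Resource:  "pods",
				Namespace: "default",
			},
		},
	}
	res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	fmt.Println("allowed:", res.Status.Allowed)
	return res.Status.Allowed, nil
}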
I0325 18:06:14.114834  113876 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.117395  113876 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.118633  113876 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.119102  113876 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.121462  113876 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.121906  113876 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
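The six near-identical horizontalpodautoscalers lines are not accidental duplication: each served autoscaling version (v1, v2beta1, v2beta2) plausibly gets its own REST storage, plus status wiring, yet all of them encode to autoscaling/v1 on disk, which is what "storing ... in autoscaling/v1" records. A hedged sketch reading one object through two served versions (the clientset and the HPA name "web" are assumptions):

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// getBothViews reads one HPA through two served API versions; both hit the
// same autoscaling/v1-encoded object in etcd. "web" is a hypothetical name.
func getBothViews(cs kubernetes.Interface) error {
	v1obj, err := cs.AutoscalingV1().HorizontalPodAutoscalers("default").Get(context.TODO(), "web", metav1.GetOptions{})
	if err != nil {
		return err
	}
	v2obj, err := cs.AutoscalingV2beta2().HorizontalPodAutoscalers("default").Get(context.TODO(), "web", metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Println(v1obj.ResourceVersion == v2obj.ResourceVersion) // same stored object
	return nil
}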
I0325 18:06:14.122642  113876 watch_cache.go:449] Replace watchCache (rev: 34749) 
I0325 18:06:14.123971  113876 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.125445  113876 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.126319  113876 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.126682  113876 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0325 18:06:14.126742  113876 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources.
I0325 18:06:14.128375  113876 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.128578  113876 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.128877  113876 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.129705  113876 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.131359  113876 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.132114  113876 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0325 18:06:14.132183  113876 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
I0325 18:06:14.133706  113876 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.134059  113876 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.135013  113876 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.136490  113876 storage_factory.go:285] storing ingressclasses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.137287  113876 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.137623  113876 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.138350  113876 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0325 18:06:14.138445  113876 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0325 18:06:14.140171  113876 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.140505  113876 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.141340  113876 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.143206  113876 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.143682  113876 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.145133  113876 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.145859  113876 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.146565  113876 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.147992  113876 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.148806  113876 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.149402  113876 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0325 18:06:14.149462  113876 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0325 18:06:14.151281  113876 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.152172  113876 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0325 18:06:14.152268  113876 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0325 18:06:14.152927  113876 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.155516  113876 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.157505  113876 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.158489  113876 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.158963  113876 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.163889  113876 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.176178  113876 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.176957  113876 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.179096  113876 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0325 18:06:14.179197  113876 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0325 18:06:14.180261  113876 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.182050  113876 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.182449  113876 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.183152  113876 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.183415  113876 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.183694  113876 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.185457  113876 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.185786  113876 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.186066  113876 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.187853  113876 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.188176  113876 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.188466  113876 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0325 18:06:14.188528  113876 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0325 18:06:14.188543  113876 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0325 18:06:14.189352  113876 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.191164  113876 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.192020  113876 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.192549  113876 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.194410  113876 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
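The storage_factory.go:285 lines above all print the same storagebackend.Config: one local etcd endpoint, a per-test UUID key prefix, paging enabled, and the default compaction and count-metric intervals (logged in nanoseconds). A minimal sketch of that configuration, assuming the k8s.io/apiserver storagebackend package vendored at this commit; only fields actually visible in the log are set:

package main

import (
	"fmt"
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

func main() {
	// Values copied from the storage_factory.go:285 lines above.
	cfg := storagebackend.Config{
		Prefix: "2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", // per-test etcd key prefix
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"}, // the local test etcd
		},
		Paging:                true,
		CompactionInterval:    5 * time.Minute, // 300000000000ns in the log
		CountMetricPollPeriod: time.Minute,     // 60000000000ns in the log
	}
	fmt.Printf("%+v\n", cfg)
}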
I0325 18:06:14.198720  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.198765  113876 healthz.go:186] healthz check poststarthook/bootstrap-controller failed: not finished
I0325 18:06:14.198777  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.198790  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.198801  113876 healthz.go:186] healthz check poststarthook/start-cluster-authentication-info-controller failed: not finished
I0325 18:06:14.198818  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/start-cluster-authentication-info-controller failed: reason withheld
healthz check failed
W0325 18:06:14.198739  113876 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0325 18:06:14.198903  113876 httplog.go:90] verb="GET" URI="/healthz" latency=356.673µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33766": 
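Each [+]/[-] line in the dump above is one named healthz check; the poststarthook checks report "not finished" until their hook returns, and the aggregated /healthz handler fails while any check does, withholding the per-check reason. A self-contained sketch of that aggregation pattern, using hypothetical types rather than the real k8s.io/apiserver/pkg/server/healthz API:

package main

import (
	"fmt"
	"net/http"
)

// namedCheck is a hypothetical stand-in for one registered healthz check.
type namedCheck struct {
	name  string
	check func() error
}

// healthzHandler runs every check, prints one [+]/[-] line per check, and
// returns a non-200 status if any check failed, mirroring the dumps above.
func healthzHandler(checks []namedCheck) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if err := c.check(); err != nil {
				failed = true
				// The real endpoint withholds the reason on /healthz.
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	checks := []namedCheck{
		{"ping", func() error { return nil }},
		{"etcd", func() error { return fmt.Errorf("client connection not yet established") }},
	}
	http.Handle("/healthz", healthzHandler(checks))
	http.ListenAndServe("127.0.0.1:8080", nil)
}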
I0325 18:06:14.199046  113876 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0325 18:06:14.199261  113876 shared_informer.go:225] Waiting for caches to sync for cluster_authentication_trust_controller
I0325 18:06:14.199512  113876 reflector.go:175] Starting reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0325 18:06:14.199560  113876 reflector.go:211] Listing and watching *v1.ConfigMap from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0325 18:06:14.200258  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0" latency=351.271µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.200450  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency=1.891462ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33768": 
I0325 18:06:14.202952  113876 get.go:251] Starting watch for /api/v1/namespaces/kube-system/configmaps, rv=34749 labels= fields= timeout=8m34s
I0325 18:06:14.203706  113876 httplog.go:90] verb="GET" URI="/api/v1/services" latency=1.184478ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.208528  113876 httplog.go:90] verb="GET" URI="/api/v1/services" latency=1.251215ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.211025  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.211091  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.211105  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.211114  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:14.211170  113876 httplog.go:90] verb="GET" URI="/healthz" latency=253.424µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.213123  113876 httplog.go:90] verb="GET" URI="/api/v1/services" latency=1.386615ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:14.213245  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.778652ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.214440  113876 httplog.go:90] verb="GET" URI="/api/v1/services" latency=1.464237ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33772": 
I0325 18:06:14.216031  113876 httplog.go:90] verb="GET" URI="/api/v1/services" latency=847.799µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33772": 
I0325 18:06:14.217244  113876 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=2.40421ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.218923  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=1.155676ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.218929  113876 httplog.go:90] verb="GET" URI="/api/v1/services" latency=2.241794ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33772": 
I0325 18:06:14.221394  113876 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=1.967585ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.222956  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-node-lease" latency=1.036842ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.225299  113876 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=1.660546ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.226951  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.229853ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.228604  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=1.226371ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.230096  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-node-lease" latency=1.092831ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
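The paired GET 404 / POST 201 lines above show the bootstrap controller ensuring the three system namespaces exist before the later GETs return 200. A get-or-create sketch of that idiom, assuming a modern k8s.io/client-go where the typed clients take a context (the function and package names here are illustrative, not the actual bootstrap-controller code):

package bootstrap

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func ensureNamespaces(ctx context.Context, client kubernetes.Interface) error {
	for _, name := range []string{"kube-system", "kube-public", "kube-node-lease"} {
		_, err := client.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			continue // GET returned 200: nothing to do
		}
		if !apierrors.IsNotFound(err) {
			return err
		}
		// GET returned 404: create it (the POST 201 in the log).
		ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
		if _, err := client.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil && !apierrors.IsAlreadyExists(err) {
			return err
		}
	}
	return nil
}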
I0325 18:06:14.299533  113876 shared_informer.go:255] caches populated
I0325 18:06:14.299595  113876 shared_informer.go:232] Caches are synced for cluster_authentication_trust_controller 
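The "Waiting for caches to sync" / "Caches are synced" pair above brackets the point where the controller's ConfigMap reflector (started at shared_informer.go:225 earlier) finishes its initial list-then-watch. A sketch of that wait, assuming k8s.io/client-go's cache package:

package bootstrap

import (
	"fmt"

	"k8s.io/client-go/tools/cache"
)

// waitForSync blocks until every informer has completed an initial List,
// producing the two shared_informer log lines seen above.
func waitForSync(stopCh <-chan struct{}, synced ...cache.InformerSynced) error {
	fmt.Println("Waiting for caches to sync for cluster_authentication_trust_controller")
	if !cache.WaitForCacheSync(stopCh, synced...) {
		return fmt.Errorf("timed out waiting for caches to sync")
	}
	fmt.Println("Caches are synced for cluster_authentication_trust_controller")
	return nil
}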
I0325 18:06:14.300071  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.300107  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.300118  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.300129  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:14.300218  113876 httplog.go:90] verb="GET" URI="/healthz" latency=287.359µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33766": 
I0325 18:06:14.312020  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.312070  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.312084  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.312094  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:14.312179  113876 httplog.go:90] verb="GET" URI="/healthz" latency=300.795µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.400106  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.400150  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.400162  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.400171  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:14.400255  113876 httplog.go:90] verb="GET" URI="/healthz" latency=306.282µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33766": 
I0325 18:06:14.412035  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.412078  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.412092  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.412109  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:14.412180  113876 httplog.go:90] verb="GET" URI="/healthz" latency=320.563µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.500631  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.500675  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.500688  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.500696  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:14.500782  113876 httplog.go:90] verb="GET" URI="/healthz" latency=308.614µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33766": 
I0325 18:06:14.511956  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.511996  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.512016  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.512026  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:14.512103  113876 httplog.go:90] verb="GET" URI="/healthz" latency=271.538µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.600072  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.600121  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.600134  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.600143  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:14.600212  113876 httplog.go:90] verb="GET" URI="/healthz" latency=267.135µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33766": 
I0325 18:06:14.612032  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.612070  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.612090  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.612100  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:14.612172  113876 httplog.go:90] verb="GET" URI="/healthz" latency=287.218µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.700105  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.700145  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.700158  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.700168  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:14.700244  113876 httplog.go:90] verb="GET" URI="/healthz" latency=254.161µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33766": 
I0325 18:06:14.712059  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.712100  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.712115  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.712124  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:14.712204  113876 httplog.go:90] verb="GET" URI="/healthz" latency=265.187µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.800183  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.800227  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.800244  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.800254  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:14.800352  113876 httplog.go:90] verb="GET" URI="/healthz" latency=325.517µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33766": 
I0325 18:06:14.811965  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.812009  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.812023  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.812046  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:14.812160  113876 httplog.go:90] verb="GET" URI="/healthz" latency=318.934µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.900252  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.900313  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.900355  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.900369  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:14.900534  113876 httplog.go:90] verb="GET" URI="/healthz" latency=476.271µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33766": 
I0325 18:06:14.912033  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.912071  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.912093  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.912113  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:14.912192  113876 httplog.go:90] verb="GET" URI="/healthz" latency=291µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.923017  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.923111  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:15.001371  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.001425  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:15.001436  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.001534  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.542233ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33766": 
I0325 18:06:15.014379  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.014426  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:15.014437  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.014537  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.493634ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:15.101368  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.101400  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:15.101411  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.101493  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.454922ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33766": 
I0325 18:06:15.112956  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.112993  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:15.113005  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.113087  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.213711ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
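Judging by the timestamps, two clients re-probe /healthz roughly every 100ms until the aggregated check returns 200, which is why the same dump repeats above with only the remaining [-] checks shrinking. A hedged sketch of such a polling loop (a hypothetical helper; the actual wait lives in the integration test framework):

package bootstrap

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := http.Get(url); err == nil {
			io.Copy(io.Discard, resp.Body) // drain; non-200 bodies list the failing [-] checks
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond) // the ~100ms cadence visible in the log
	}
	return fmt.Errorf("timed out waiting for %s to report ok", url)
}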
I0325 18:06:15.200353  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.464235ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:15.201384  113876 httplog.go:90] verb="GET" URI="/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical" latency=1.618765ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.202351  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.202376  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:15.202385  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.202445  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.244844ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33766": 
I0325 18:06:15.202683  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.51621ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33936": 
I0325 18:06:15.204567  113876 httplog.go:90] verb="POST" URI="/apis/scheduling.k8s.io/v1/priorityclasses" latency=2.523773ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.205105  113876 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0325 18:06:15.205249  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin" latency=1.823547ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:15.207159  113876 httplog.go:90] verb="GET" URI="/apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical" latency=1.407419ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.207511  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/admin" latency=1.384805ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:15.209605  113876 httplog.go:90] verb="POST" URI="/apis/scheduling.k8s.io/v1/priorityclasses" latency=1.948076ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.209840  113876 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0325 18:06:15.209867  113876 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
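The scheduling/bootstrap-system-priority-classes hook traced above uses the same get-or-create idiom as the namespace sketch earlier, over scheduling.k8s.io/v1 and with the two values the log reports. A sketch under the same client-go assumptions:

package bootstrap

import (
	"context"

	schedulingv1 "k8s.io/api/scheduling/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func ensureSystemPriorityClasses(ctx context.Context, client kubernetes.Interface) error {
	for _, pc := range []struct {
		name  string
		value int32
	}{
		{"system-node-critical", 2000001000},    // value from storage_scheduling.go:134 above
		{"system-cluster-critical", 2000000000}, // value from storage_scheduling.go:134 above
	} {
		_, err := client.SchedulingV1().PriorityClasses().Get(ctx, pc.name, metav1.GetOptions{})
		if err == nil {
			continue // already exists
		}
		if !apierrors.IsNotFound(err) {
			return err
		}
		obj := &schedulingv1.PriorityClass{
			ObjectMeta: metav1.ObjectMeta{Name: pc.name},
			Value:      pc.value,
		}
		if _, err := client.SchedulingV1().PriorityClasses().Create(ctx, obj, metav1.CreateOptions{}); err != nil && !apierrors.IsAlreadyExists(err) {
			return err
		}
	}
	return nil
}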
I0325 18:06:15.212047  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit" latency=4.032644ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:15.212595  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.212624  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.212678  113876 httplog.go:90] verb="GET" URI="/healthz" latency=936.41µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.213529  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/edit" latency=932.425µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:15.215144  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view" latency=1.083216ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.216692  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/view" latency=996.313µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.218035  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery" latency=885.384µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.219728  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin" latency=936.006µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.222112  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.889157ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.222736  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0325 18:06:15.225027  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery" latency=1.833044ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.230582  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=4.913982ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.232060  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0325 18:06:15.233814  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user" latency=1.536569ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.237787  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=3.432209ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.238053  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0325 18:06:15.244524  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer" latency=6.051779ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.252529  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=7.163047ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.253013  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0325 18:06:15.254965  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/admin" latency=1.267712ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.259127  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=3.577672ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.259386  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/admin
I0325 18:06:15.263169  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/edit" latency=3.530902ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.268005  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=3.254509ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.268320  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/edit
I0325 18:06:15.272323  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/view" latency=1.494158ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.274966  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.065195ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.275194  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/view
I0325 18:06:15.276650  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin" latency=1.180505ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.289085  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=11.90855ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.289343  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0325 18:06:15.291129  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit" latency=1.51264ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.297933  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=6.124776ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.298430  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0325 18:06:15.299877  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view" latency=1.160579ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.301763  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.301877  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.301987  113876 httplog.go:90] verb="GET" URI="/healthz" latency=2.079057ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:15.304785  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=4.272378ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.305104  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0325 18:06:15.306497  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster" latency=1.078261ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.309910  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.861179ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.310397  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0325 18:06:15.311919  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node" latency=1.302586ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.313145  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.313175  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.313229  113876 httplog.go:90] verb="GET" URI="/healthz" latency=920.38µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.315058  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.263053ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.315408  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node
I0325 18:06:15.318247  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector" latency=2.374986ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.320864  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.068644ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.321222  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0325 18:06:15.322796  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin" latency=1.304715ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.327142  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=3.810706ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.327468  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0325 18:06:15.329579  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper" latency=1.839429ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.333238  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.986987ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.333487  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0325 18:06:15.334799  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator" latency=1.052214ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.338333  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.947508ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.338639  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0325 18:06:15.339999  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator" latency=1.034134ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.342698  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.159968ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.343089  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0325 18:06:15.345221  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager" latency=1.770869ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.348205  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.276712ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.348586  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0325 18:06:15.350042  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns" latency=1.190304ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.361350  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=10.743093ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.361658  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0325 18:06:15.363378  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner" latency=1.38291ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.366411  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.224751ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.366710  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0325 18:06:15.368369  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient" latency=1.259196ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.371716  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.820016ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.371956  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0325 18:06:15.374364  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" latency=2.108623ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.376737  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.896529ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.376987  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0325 18:06:15.378419  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler" latency=1.197662ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.388519  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=9.621046ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.388907  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0325 18:06:15.390283  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:legacy-unknown-approver" latency=1.11534ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.394493  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=3.487511ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.395066  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:legacy-unknown-approver
I0325 18:06:15.396802  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kubelet-serving-approver" latency=1.244663ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.399668  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.929799ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.400008  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kubelet-serving-approver
I0325 18:06:15.401863  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.401925  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.402125  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.952585ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:15.402935  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kube-apiserver-client-approver" latency=1.080938ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.405444  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.998535ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.405691  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-approver
I0325 18:06:15.407298  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver" latency=1.327853ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.410309  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.309329ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.410619  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver
I0325 18:06:15.411959  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier" latency=1.04871ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.412735  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.412767  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.412844  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.029288ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.414894  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.282948ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.415249  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0325 18:06:15.416461  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler" latency=972.715µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.419847  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.633148ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.420192  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0325 18:06:15.421556  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller" latency=1.022227ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.424807  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.895758ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.425345  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0325 18:06:15.427179  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller" latency=1.450567ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.429921  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.105798ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.430525  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0325 18:06:15.435330  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller" latency=4.193999ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.439324  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=3.146672ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.439858  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0325 18:06:15.441908  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller" latency=1.617369ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.444940  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.27389ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.445268  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0325 18:06:15.447103  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller" latency=1.507569ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.450102  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.411116ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.450550  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0325 18:06:15.452085  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller" latency=1.208764ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.455566  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.729355ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.455909  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0325 18:06:15.459145  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller" latency=2.780286ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.481012  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=21.008414ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.481475  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0325 18:06:15.486854  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpointslice-controller" latency=1.871974ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.490577  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=3.14492ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.491154  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0325 18:06:15.503174  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.503214  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.503295  113876 httplog.go:90] verb="GET" URI="/healthz" latency=3.24515ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:15.504537  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller" latency=11.946597ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.508149  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.956455ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.508462  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0325 18:06:15.509875  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector" latency=1.122594ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.513029  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.600179ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.513277  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0325 18:06:15.514638  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler" latency=1.058614ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.518384  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.386559ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.519043  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0325 18:06:15.519983  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.520042  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.520110  113876 httplog.go:90] verb="GET" URI="/healthz" latency=8.40655ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.521256  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller" latency=1.681137ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.523895  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.045744ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.524235  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0325 18:06:15.543494  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller" latency=18.892916ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.552010  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=7.843376ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.552383  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0325 18:06:15.559181  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller" latency=6.498326ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.562311  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.377535ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.562843  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0325 18:06:15.564897  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder" latency=1.456997ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.568469  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.80795ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.568804  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0325 18:06:15.570479  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector" latency=1.438966ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.574108  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=3.020939ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.574512  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0325 18:06:15.575901  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller" latency=1.115321ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.578622  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.147515ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.578947  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0325 18:06:15.582017  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller" latency=2.810855ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.585582  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.891861ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.586113  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0325 18:06:15.587838  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller" latency=1.274862ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.590571  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.119815ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.590848  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0325 18:06:15.592454  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller" latency=1.191516ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.595208  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.168242ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.595510  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0325 18:06:15.596899  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller" latency=1.083215ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.599030  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.644237ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.599321  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0325 18:06:15.600754  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller" latency=1.13402ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.600977  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.601004  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.601059  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.249791ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:15.603220  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.762156ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.603525  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0325 18:06:15.605031  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller" latency=1.277641ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.607935  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.413755ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.608295  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0325 18:06:15.611196  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller" latency=2.624606ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.612580  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.612610  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.612665  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.005448ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.614130  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.314162ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.614424  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0325 18:06:15.615891  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller" latency=1.18408ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.620233  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=3.868349ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.620521  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0325 18:06:15.621899  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller" latency=1.10315ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.624148  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.691127ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.624495  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0325 18:06:15.625888  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller" latency=1.069227ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.628901  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.262754ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.629445  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0325 18:06:15.640222  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin" latency=1.412523ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.661854  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.755347ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.662138  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
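Here the reconciler moves on from cluster roles to cluster role bindings (note the new code site, storage_rbac.go:248), using the same GET-404-then-POST-201 sequence. A hedged client-go sketch of creating one such binding follows; the in-cluster config, the helper name, and the subject shown (system:discovery is commonly bound to the system:authenticated group) are illustrative assumptions, not the reconciler's own code.

package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// ensureClusterRoleBinding follows the GET-404-then-POST-201 sequence the
// log shows at storage_rbac.go:248 for the bootstrap bindings.
func ensureClusterRoleBinding(ctx context.Context, cs kubernetes.Interface, crb *rbacv1.ClusterRoleBinding) error {
	_, err := cs.RbacV1().ClusterRoleBindings().Get(ctx, crb.Name, metav1.GetOptions{})
	if err == nil {
		return nil // binding already exists
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	_, err = cs.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
	return err
}

func main() {
	cfg, err := rest.InClusterConfig() // assumed environment for the sketch
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "system:discovery"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "system:discovery",
		},
		Subjects: []rbacv1.Subject{{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "Group",
			Name:     "system:authenticated",
		}},
	}
	if err := ensureClusterRoleBinding(context.Background(), cs, crb); err != nil {
		panic(err)
	}
}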
I0325 18:06:15.680413  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery" latency=1.45969ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.701149  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.190598ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.701455  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0325 18:06:15.701503  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.701530  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.701610  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.752155ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:15.713030  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.713068  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.713155  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.255793ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.720390  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user" latency=1.452376ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.741575  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.513826ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.741980  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0325 18:06:15.760663  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer" latency=1.683695ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.781414  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.391833ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.781958  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0325 18:06:15.800364  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier" latency=1.400704ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.801002  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.801184  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.801268  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.395311ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:15.813291  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.813323  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.813398  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.55053ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.821495  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.639775ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.821832  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0325 18:06:15.840827  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager" latency=1.739844ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.862101  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.185912ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.862448  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0325 18:06:15.880541  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns" latency=1.405658ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.901637  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.901693  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.901775  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.916975ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:15.901882  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.925851ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.902267  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0325 18:06:15.913147  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.913190  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.913278  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.388353ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.920105  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler" latency=1.267444ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.941333  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.357447ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.941637  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0325 18:06:15.960567  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler" latency=1.648367ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.981289  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.362237ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.981689  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0325 18:06:16.000435  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node" latency=1.507249ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.001143  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.001175  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.001244  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.143277ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:16.013027  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.013061  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.013218  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.365082ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.021066  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.130512ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.021330  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0325 18:06:16.040426  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller" latency=1.425782ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.061542  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.59992ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.061947  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0325 18:06:16.080771  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller" latency=1.862182ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.101453  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.101490  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.101562  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.688211ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:16.102728  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.707782ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.103172  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0325 18:06:16.113042  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.113089  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.113164  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.331164ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.120308  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller" latency=1.402094ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.141119  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.174789ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.141400  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0325 18:06:16.160578  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller" latency=1.603594ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.183140  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.061485ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.183464  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0325 18:06:16.200589  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller" latency=1.615363ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.201836  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.201876  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.201965  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.498518ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:16.213163  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.213197  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.213285  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.453425ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.221775  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.812814ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.222471  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0325 18:06:16.241762  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller" latency=2.738434ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.269683  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=8.6318ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.270099  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0325 18:06:16.280531  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller" latency=1.402663ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.303498  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.303531  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.303616  113876 httplog.go:90] verb="GET" URI="/healthz" latency=3.664752ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:16.305359  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.42273ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.305736  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0325 18:06:16.313084  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.313123  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.313218  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.425231ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.320824  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpointslice-controller" latency=1.592968ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.342733  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.648851ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.343181  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0325 18:06:16.361743  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller" latency=1.169866ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.381588  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.591798ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.381936  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0325 18:06:16.400426  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector" latency=1.510886ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.404531  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.404586  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.404690  113876 httplog.go:90] verb="GET" URI="/healthz" latency=4.830031ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:16.412744  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.412782  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.412871  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.058503ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.421400  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.500271ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.421951  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0325 18:06:16.440576  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler" latency=1.285472ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.461738  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.765701ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.462179  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0325 18:06:16.480695  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller" latency=1.653391ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.509378  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=6.979395ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.509624  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:16.509741  113876 httplog.go:90] verb="GET" URI="/healthz" latency=7.365181ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:16.509917  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0325 18:06:16.513376  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:16.513552  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.609677ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.522401  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller" latency=2.896672ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.541152  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.166293ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.541453  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0325 18:06:16.560118  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller" latency=1.215781ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.581620  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.69229ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.581972  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0325 18:06:16.603814  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:16.603928  113876 httplog.go:90] verb="GET" URI="/healthz" latency=2.301152ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:16.604997  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder" latency=1.209936ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.612846  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:16.612966  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.156396ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.621309  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.379862ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.621641  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0325 18:06:16.642357  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector" latency=1.310899ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.662430  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.689742ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.662923  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0325 18:06:16.681525  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller" latency=2.56403ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.701890  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:16.701966  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.030866ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.702015  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.944638ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:16.702252  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0325 18:06:16.715502  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:16.715622  113876 httplog.go:90] verb="GET" URI="/healthz" latency=3.724037ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.720043  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller" latency=1.179405ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.744358  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=5.218372ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.744699  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0325 18:06:16.760383  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller" latency=1.459174ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.781772  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.799011ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.782320  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0325 18:06:16.800433  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller" latency=1.508097ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.801740  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:16.801836  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.882595ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:16.812912  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:16.813054  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.267576ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.822418  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.455737ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.822720  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0325 18:06:16.840327  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller" latency=1.388308ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.862072  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.337656ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.862634  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0325 18:06:16.880468  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller" latency=1.505442ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.901851  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:16.901956  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.654169ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:16.902300  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.293697ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.902722  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0325 18:06:16.913070  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:16.913187  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.337202ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.926050  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller" latency=7.161835ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.942367  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.390931ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.942831  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0325 18:06:16.960359  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller" latency=1.415105ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.981659  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.741132ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.981967  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0325 18:06:17.000315  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller" latency=1.414011ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.000958  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:17.001056  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.122643ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:17.014562  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:17.014691  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.318025ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.020960  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.030649ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.021411  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0325 18:06:17.040844  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller" latency=1.922883ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.061183  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.192293ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.061755  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0325 18:06:17.080386  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller" latency=1.460502ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.101244  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.309751ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.101926  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:17.102022  113876 httplog.go:90] verb="GET" URI="/healthz" latency=2.121983ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:17.102045  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0325 18:06:17.113174  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:17.113300  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.391882ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.120361  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader" latency=1.392532ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.122606  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.664578ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.142191  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=3.173982ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.142531  113876 storage_rbac.go:279] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0325 18:06:17.160824  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer" latency=1.764966ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.163172  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.608603ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.181714  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=2.759325ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.182120  113876 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0325 18:06:17.200650  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider" latency=1.654447ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.201066  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:17.201191  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.378335ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:17.203149  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.482911ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.213177  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:17.213316  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.330321ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.221306  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=2.384754ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.221912  113876 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0325 18:06:17.242410  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner" latency=3.372971ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.245646  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=2.419046ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.261659  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=2.679611ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.261979  113876 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0325 18:06:17.280402  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager" latency=1.433076ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.282691  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.848759ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.301784  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=2.847583ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.302830  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:17.302919  113876 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0325 18:06:17.302938  113876 httplog.go:90] verb="GET" URI="/healthz" latency=3.011674ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:17.314655  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:17.314841  113876 httplog.go:90] verb="GET" URI="/healthz" latency=2.853428ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.320271  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler" latency=1.368417ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.322935  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.911103ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.342440  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=2.425827ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.343230  113876 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0325 18:06:17.360456  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer" latency=1.50494ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.364771  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=3.731456ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.381342  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles" latency=2.299773ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.381628  113876 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0325 18:06:17.400520  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader" latency=1.441672ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.402092  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:17.402225  113876 httplog.go:90] verb="GET" URI="/healthz" latency=2.250815ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:17.403147  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.665432ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.413127  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:17.413264  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.385675ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.422346  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=3.256544ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.422752  113876 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0325 18:06:17.443928  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager" latency=4.998594ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.446394  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.606921ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.461543  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=2.574234ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.461900  113876 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0325 18:06:17.480391  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler" latency=1.435686ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.482629  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.649343ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.501826  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:17.502927  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=2.612405ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.503067  113876 httplog.go:90] verb="GET" URI="/healthz" latency=2.786138ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:17.503697  113876 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0325 18:06:17.512944  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:17.513069  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.267074ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.520521  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer" latency=1.490413ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.523454  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.933705ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.542841  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=3.869114ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.543418  113876 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0325 18:06:17.560807  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider" latency=1.815373ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.564265  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.723239ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.585515  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=6.533113ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.586092  113876 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0325 18:06:17.600912  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner" latency=1.778976ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.602252  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:17.602364  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.607245ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:17.604026  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.686634ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.613134  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
[... identical /healthz check list elided: all checks ok except poststarthook/rbac/bootstrap-roles; healthz check failed ...]
I0325 18:06:17.613266  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.441167ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.621434  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=2.384309ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.621780  113876 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0325 18:06:17.640578  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer" latency=1.525027ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.643171  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=1.988098ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.661201  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings" latency=2.223238ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.661531  113876 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0325 18:06:17.701026  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.086672ms resp=200 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
W0325 18:06:17.701879  113876 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
[... 2 identical mutation detector warnings elided ...]
I0325 18:06:17.701990  113876 factory.go:224] Creating scheduler from algorithm provider 'DefaultProvider'
I0325 18:06:17.702006  113876 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0325 18:06:17.702018  113876 registry.go:150] Registering EvenPodsSpread predicate and priority function
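"Creating scheduler from algorithm provider 'DefaultProvider'" assembles the scheduler from the built-in plugin set; integration tests like this one then splice their test plugin in through an out-of-tree framework.Registry. A rough sketch of such a registry entry against the v1alpha1 framework of this era (the factory signature shifted between releases, so treat the shape below as an approximation, not the test's exact code):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
)

// stub satisfies framework.Plugin, which only requires Name().
type stub struct{}

func (s *stub) Name() string { return "prescore-plugin" }

// newRegistry maps the plugin name to a factory; the scheduler merges such an
// out-of-tree registry with the in-tree one when it builds its profiles.
func newRegistry() framework.Registry {
	return framework.Registry{
		"prescore-plugin": func(_ *runtime.Unknown, _ framework.FrameworkHandle) (framework.Plugin, error) {
			return &stub{}, nil
		},
	}
}

func main() {
	fmt.Println(len(newRegistry())) // 1 entry, consumed at scheduler construction
}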
W0325 18:06:17.702120  113876 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
[... 4 identical mutation detector warnings elided ...]
I0325 18:06:17.702918  113876 reflector.go:175] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:135
I0325 18:06:17.702940  113876 reflector.go:211] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:135
I0325 18:06:17.702976  113876 reflector.go:175] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:135
I0325 18:06:17.702986  113876 reflector.go:211] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:135
I0325 18:06:17.703141  113876 reflector.go:175] Starting reflector *v1.CSINode (1s) from k8s.io/client-go/informers/factory.go:135
I0325 18:06:17.703156  113876 reflector.go:211] Listing and watching *v1.CSINode from k8s.io/client-go/informers/factory.go:135
I0325 18:06:17.702946  113876 reflector.go:175] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:135
I0325 18:06:17.703205  113876 reflector.go:211] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:135
I0325 18:06:17.703267  113876 reflector.go:175] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:135
I0325 18:06:17.703280  113876 reflector.go:211] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:135
I0325 18:06:17.703602  113876 reflector.go:175] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:135
I0325 18:06:17.703615  113876 reflector.go:211] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:135
I0325 18:06:17.704253  113876 httplog.go:90] verb="GET" URI="/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0" latency=438.625µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.704271  113876 httplog.go:90] verb="GET" URI="/api/v1/pods?limit=500&resourceVersion=0" latency=343.456µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34332": 
I0325 18:06:17.706967  113876 httplog.go:90] verb="GET" URI="/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0" latency=1.645737ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.707382  113876 get.go:251] Starting watch for /apis/storage.k8s.io/v1/csinodes, rv=34749 labels= fields= timeout=7m58s
I0325 18:06:17.707543  113876 httplog.go:90] verb="GET" URI="/api/v1/nodes?limit=500&resourceVersion=0" latency=454.792µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.707849  113876 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=34749 labels= fields= timeout=6m57s
I0325 18:06:17.708098  113876 httplog.go:90] verb="GET" URI="/api/v1/services?limit=500&resourceVersion=0" latency=366.173µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34332": 
I0325 18:06:17.708224  113876 get.go:251] Starting watch for /api/v1/nodes, rv=34749 labels= fields= timeout=8m2s
I0325 18:06:17.708777  113876 get.go:251] Starting watch for /api/v1/services, rv=34749 labels= fields= timeout=9m31s
I0325 18:06:17.709290  113876 get.go:251] Starting watch for /api/v1/pods, rv=34749 labels= fields= timeout=5m0s
I0325 18:06:17.709775  113876 reflector.go:175] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:135
I0325 18:06:17.709795  113876 reflector.go:211] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:135
I0325 18:06:17.709870  113876 reflector.go:175] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:135
I0325 18:06:17.709881  113876 reflector.go:211] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:135
I0325 18:06:17.710578  113876 httplog.go:90] verb="GET" URI="/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0" latency=6.177165ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34334": 
I0325 18:06:17.711999  113876 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=34749 labels= fields= timeout=5m20s
I0325 18:06:17.712937  113876 httplog.go:90] verb="GET" URI="/api/v1/persistentvolumes?limit=500&resourceVersion=0" latency=803.135µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:17.714460  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.537212ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34350": 
I0325 18:06:17.715433  113876 httplog.go:90] verb="GET" URI="/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0" latency=1.943964ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34352": 
I0325 18:06:17.715502  113876 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=34749 labels= fields= timeout=8m49s
I0325 18:06:17.716207  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default" latency=1.167171ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:17.716259  113876 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=34749 labels= fields= timeout=5m5s
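Each "Starting reflector *v1.X (1s)" line above is a client-go reflector doing an initial LIST (the limit=500&resourceVersion=0 requests) and then a WATCH from the returned resourceVersion (the watch=true requests); the "(1s)" is the resync period. A minimal standalone sketch of one such reflector, with illustrative kubeconfig wiring:

package main

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// LIST /api/v1/pods then WATCH from the returned resourceVersion,
	// mirroring the pod reflector in the log.
	lw := cache.NewListWatchFromClient(cs.CoreV1().RESTClient(), "pods", metav1.NamespaceAll, fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	reflector := cache.NewReflector(lw, &v1.Pod{}, store, time.Second) // 1s resync, as in "(1s)"

	stop := make(chan struct{})
	go reflector.Run(stop)
	time.Sleep(3 * time.Second) // demo only; real callers coordinate via informers
	close(stop)
}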
I0325 18:06:17.719679  113876 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=2.915658ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:17.721589  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default/services/kubernetes" latency=1.46847ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:17.726795  113876 httplog.go:90] verb="POST" URI="/api/v1/namespaces/default/services" latency=4.634723ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:17.729159  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency=1.882708ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:17.732324  113876 httplog.go:90] verb="POST" URI="/api/v1/namespaces/default/endpoints" latency=2.493454ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:17.734434  113876 httplog.go:90] verb="GET" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes" latency=1.52258ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:17.738648  113876 httplog.go:90] verb="POST" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices" latency=3.53207ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:17.802799  113876 shared_informer.go:255] caches populated
[... 8 identical "caches populated" lines elided ...]
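The "caches populated" lines are the shared informer machinery reporting that each informer's initial LIST has landed, which the test waits for before creating nodes and pods. A minimal sketch of starting a shared informer factory and blocking on sync (kubeconfig wiring illustrative):

package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// One factory, 1s resync as in the log; each requested informer gets
	// its own reflector when Start is called.
	factory := informers.NewSharedInformerFactory(cs, time.Second)
	podsSynced := factory.Core().V1().Pods().Informer().HasSynced
	nodesSynced := factory.Core().V1().Nodes().Informer().HasSynced

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Equivalent of waiting for the "caches populated" state.
	if !cache.WaitForCacheSync(stop, podsSynced, nodesSynced) {
		panic("caches never synced")
	}
}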
I0325 18:06:17.806822  113876 httplog.go:90] verb="POST" URI="/api/v1/nodes" latency=3.471134ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:17.807264  113876 node_tree.go:86] Added node "test-node-0" in group "" to NodeTree
I0325 18:06:17.807285  113876 eventhandlers.go:104] add event for node "test-node-0"
I0325 18:06:17.809957  113876 httplog.go:90] verb="POST" URI="/api/v1/nodes" latency=2.433258ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:17.813141  113876 httplog.go:90] verb="POST" URI="/api/v1/namespaces/pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/pods" latency=2.377197ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:17.813530  113876 eventhandlers.go:173] add event for unscheduled pod pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/test-pod
I0325 18:06:17.813574  113876 scheduling_queue.go:810] About to try and schedule pod pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/test-pod
I0325 18:06:17.813586  113876 scheduler.go:578] Attempting to schedule pod: pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/test-pod
I0325 18:06:17.813765  113876 scheduler_binder.go:323] AssumePodVolumes for pod "pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/test-pod", node "test-node-0"
I0325 18:06:17.813792  113876 scheduler_binder.go:333] AssumePodVolumes for pod "pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/test-pod", node "test-node-0": all PVCs bound and nothing to do
I0325 18:06:17.813920  113876 default_binder.go:51] Attempting to bind pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/test-pod to test-node-0
I0325 18:06:17.819210  113876 httplog.go:90] verb="POST" URI="/api/v1/namespaces/pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/pods/test-pod/binding" latency=4.886351ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:17.819478  113876 scheduler.go:740] pod pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/test-pod is bound successfully on node "test-node-0", 1 nodes evaluated, 1 nodes were found feasible.
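The bind step above is a POST of a v1.Binding to the pod's binding subresource, exactly the /pods/test-pod/binding request in the log. A minimal client-go sketch of what the default binder does (clientset wiring illustrative; names taken from the log):

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	ns := "pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a"
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: "test-pod"},
		Target:     v1.ObjectReference{Kind: "Node", Name: "test-node-0"},
	}
	// POST .../namespaces/<ns>/pods/test-pod/binding, as in the log.
	if err := cs.CoreV1().Pods(ns).Bind(context.Background(), binding, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}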
I0325 18:06:17.819714  113876 node_tree.go:86] Added node "test-node-1" in group "" to NodeTree
I0325 18:06:17.819744  113876 eventhandlers.go:104] add event for node "test-node-1"
I0325 18:06:17.821488  113876 eventhandlers.go:205] delete event for unscheduled pod pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/test-pod
I0325 18:06:17.821554  113876 eventhandlers.go:229] add event for scheduled pod pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/test-pod 
I0325 18:06:17.823415  113876 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/events" latency=3.502944ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:17.916571  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/pods/test-pod" latency=2.523372ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:17.930322  113876 eventhandlers.go:278] delete event for scheduled pod pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/test-pod 
I0325 18:06:17.931303  113876 httplog.go:90] verb="DELETE" URI="/api/v1/namespaces/pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/pods/test-pod" latency=13.870011ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:17.937343  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/pods/test-pod" latency=2.253518ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:17.940653  113876 httplog.go:90] verb="POST" URI="/api/v1/namespaces/pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/pods" latency=2.703817ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:17.940652  113876 eventhandlers.go:173] add event for unscheduled pod pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/test-pod
I0325 18:06:17.940766  113876 scheduling_queue.go:810] About to try and schedule pod pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/test-pod
I0325 18:06:17.940784  113876 scheduler.go:578] Attempting to schedule pod: pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/test-pod
W0325 18:06:17.940994  113876 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
[... 2 identical mutation detector warnings elided ...]
E0325 18:06:17.941085  113876 framework.go:481] error while running "prescore-plugin" prescore plugin for pod "test-pod": injecting failure for pod test-pod
E0325 18:06:17.941107  113876 scheduler.go:608] error selecting node for pod: error while running "prescore-plugin" prescore plugin for pod "test-pod": injecting failure for pod test-pod
E0325 18:06:17.941131  113876 factory.go:482] Error scheduling pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/test-pod: error while running "prescore-plugin" prescore plugin for pod "test-pod": injecting failure for pod test-pod; retrying
I0325 18:06:17.941158  113876 scheduler.go:785] Updating pod condition for pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/test-pod to (PodScheduled==False, Reason=Unschedulable)
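The three E-lines above are the point of TestPreScorePlugin: the registered "prescore-plugin" deliberately fails its PreScore hook, the framework surfaces the error, and the scheduler marks the pod Unschedulable and retries. A rough sketch of such a failure-injecting plugin against the v1alpha1 framework (the type and field names below are assumptions patterned on the framework, not the test's exact source):

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
)

// failingPreScore fails every PreScore call while failPreScore is set.
type failingPreScore struct {
	failPreScore bool
}

func (pl *failingPreScore) Name() string { return "prescore-plugin" }

// PreScore runs after filtering and before scoring; returning a non-nil
// Status with code Error aborts the scheduling cycle for this pod.
func (pl *failingPreScore) PreScore(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodes []*v1.Node) *framework.Status {
	if pl.failPreScore {
		// Yields the log's: injecting failure for pod test-pod
		return framework.NewStatus(framework.Error, fmt.Sprintf("injecting failure for pod %v", pod.Name))
	}
	return nil
}

func main() {
	pl := &failingPreScore{failPreScore: true}
	pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "test-pod"}}
	status := pl.PreScore(context.Background(), framework.NewCycleState(), pod, nil)
	fmt.Println(status.Message())
}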
I0325 18:06:17.944493  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/pods/test-pod" latency=1.926179ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34462": 
I0325 18:06:17.945800  113876 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/events" latency=3.217575ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34466": 
I0325 18:06:17.947270  113876 httplog.go:90] verb="PUT" URI="/api/v1/namespaces/pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/pods/test-pod/status" latency=4.742655ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34348": 
I0325 18:06:18.052448  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/pods/test-pod" latency=10.701984ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34466": 
I0325 18:06:18.146863  113876 scheduling_queue.go:810] About to try and schedule pod pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/test-pod
I0325 18:06:18.146908  113876 scheduler.go:766] Skip schedule deleting pod: pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/test-pod
I0325 18:06:18.156573  113876 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/events" latency=9.155861ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34462": 
I0325 18:06:18.171881  113876 httplog.go:90] verb="DELETE" URI="/api/v1/namespaces/pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/pods/test-pod" latency=118.73005ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34466": 
I0325 18:06:18.172566  113876 eventhandlers.go:205] delete event for unscheduled pod pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/test-pod
I0325 18:06:18.176763  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/pre-score-plugin775e7a53-3514-4c59-a3c1-d14f02d4c48a/pods/test-pod" latency=1.783538ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34466": 
I0325 18:06:18.177746  113876 reflector.go:181] Stopping reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:135
I0325 18:06:18.177779  113876 reflector.go:181] Stopping reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:135
I0325 18:06:18.177794  113876 reflector.go:181] Stopping reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:135
I0325 18:06:18.177811  113876 reflector.go:181] Stopping reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:135
I0325 18:06:18.177828  113876 reflector.go:181] Stopping reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:135
I0325 18:06:18.177843  113876 reflector.go:181] Stopping reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:135
I0325 18:06:18.177860  113876 reflector.go:181] Stopping reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:135
I0325 18:06:18.177886  113876 reflector.go:181] Stopping reflector *v1.CSINode (1s) from k8s.io/client-go/informers/factory.go:135
I0325 18:06:18.178249  113876 httplog.go:90] verb="GET" URI="/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=34749&timeout=8m49s&timeoutSeconds=529&watch=true" latency=463.212355ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34350": 
I0325 18:06:18.178288  113876 httplog.go:90] verb="GET" URI="/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=34749&timeout=6m57s&timeoutSeconds=417&watch=true" latency=470.581734ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34342": 
I0325 18:06:18.178473  113876 httplog.go:90] verb="GET" URI="/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=34749&timeout=5m5s&timeoutSeconds=305&watch=true" latency=462.397165ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34352": 
I0325 18:06:18.178505  113876 httplog.go:90] verb="GET" URI="/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=34749&timeout=5m20s&timeoutSeconds=320&watch=true" latency=466.650436ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34334": 
I0325 18:06:18.178629  113876 httplog.go:90] verb="GET" URI="/api/v1/services?allowWatchBookmarks=true&resourceVersion=34749&timeout=9m31s&timeoutSeconds=571&watch=true" latency=470.065459ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34332": 
I0325 18:06:18.178647  113876 httplog.go:90] verb="GET" URI="/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=34749&timeout=8m2s&timeoutSeconds=482&watch=true" latency=470.644043ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:18.178816  113876 httplog.go:90] verb="GET" URI="/api/v1/pods?allowWatchBookmarks=true&resourceVersion=34749&timeout=5m0s&timeoutSeconds=300&watch=true" latency=469.735064ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34338": 
I0325 18:06:18.182921  113876 httplog.go:90] verb="GET" URI="/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=34749&timeout=7m58s&timeoutSeconds=478&watch=true" latency=470.914789ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34340": 
I0325 18:06:18.233612  113876 httplog.go:90] verb="DELETE" URI="/api/v1/nodes" latency=54.797752ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34466": 
I0325 18:06:18.233889  113876 controller.go:181] Shutting down kubernetes service endpoint reconciler
I0325 18:06:18.236836  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency=2.503467ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34466": 
I0325 18:06:18.246681  113876 httplog.go:90] verb="PUT" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency=9.14255ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34466": 
I0325 18:06:18.251929  113876 httplog.go:90] verb="GET" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes" latency=4.713768ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34466": 
I0325 18:06:18.255781  113876 httplog.go:90] verb="PUT" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes" latency=3.195967ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34466": 
I0325 18:06:18.256348  113876 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0325 18:06:18.256492  113876 reflector.go:181] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0325 18:06:18.256644  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&resourceVersion=34749&timeout=8m34s&timeoutSeconds=514&watch=true" latency=4.053953387s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33768": 
--- FAIL: TestPreScorePlugin (4.33s)
    framework_test.go:1475: Expected the pre-score plugin to be called.

				from junit_20200325-175743.xml
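
For context on the failure: the scheduler log above shows the test's own "prescore-plugin" returning an injected error ("injecting failure for pod test-pod") from the PreScore extension point, after which the pod is marked Unschedulable and then deleted. The assertion at framework_test.go:1475 most likely checks a call counter after a pass in which the plugin is not told to fail; the failure means PreScore never ran in that pass. A minimal sketch of such a plugin against the v1alpha1 scheduler framework of this branch (the counter and failure switch are illustrative assumptions, not copied from framework_test.go):

    package prescoretest

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
    )

    // PreScorePlugin counts PreScore invocations and can inject a failure,
    // mirroring the "injecting failure for pod test-pod" lines above.
    type PreScorePlugin struct {
        numPreScoreCalled int
        failPreScore      bool
    }

    // Name returns the plugin name that appears in the scheduler errors.
    func (p *PreScorePlugin) Name() string { return "prescore-plugin" }

    // PreScore runs once per scheduling cycle, after filtering and before
    // per-node scoring; a non-nil error Status aborts the cycle.
    func (p *PreScorePlugin) PreScore(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodes []*v1.Node) *framework.Status {
        p.numPreScoreCalled++
        if p.failPreScore {
            return framework.NewStatus(framework.Error, fmt.Sprintf("injecting failure for pod %v", pod.Name))
        }
        return nil
    }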


Error lines from build-log.txt

... skipping 50 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 155: bogus-expected-to-fail: command not found
!!! [0325 17:45:37] Call tree:
!!! [0325 17:45:37]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0325 17:45:37]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0325 17:45:37]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:131 juLog(...)
!!! [0325 17:45:37]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:159 record_command(...)
!!! [0325 17:45:37]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [0325 17:45:37] Running kubeadm tests
warning: ignoring symlink /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes
go: warning: "k8s.io/kubernetes/vendor/github.com/go-bindata/go-bindata/..." matched no packages
+++ [0325 17:45:43] Building go targets for linux/amd64:
    cmd/kubeadm
warning: ignoring symlink /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes
... skipping 318 lines ...
go: warning: "k8s.io/kubernetes/vendor/github.com/go-bindata/go-bindata/..." matched no packages
+++ [0325 17:50:24] Building go targets for linux/amd64:
    cmd/kube-controller-manager
+++ [0325 17:51:01] Starting controller-manager
Flag --port has been deprecated, see --secure-port instead.
I0325 17:51:02.698085   55880 serving.go:329] Generated self-signed cert in-memory
W0325 17:51:03.135806   55880 authentication.go:409] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0325 17:51:03.135859   55880 authentication.go:268] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0325 17:51:03.135871   55880 authentication.go:292] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0325 17:51:03.135886   55880 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0325 17:51:03.135899   55880 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0325 17:51:03.135925   55880 controllermanager.go:161] Version: v1.19.0-alpha.0.1088+d00f9c7c1091e3
I0325 17:51:03.139129   55880 secure_serving.go:178] Serving securely on [::]:10257
I0325 17:51:03.139359   55880 tlsconfig.go:240] Starting DynamicServingCertificateController
I0325 17:51:03.140176   55880 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0325 17:51:03.140321   55880 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...
... skipping 149 lines ...
I0325 17:51:04.193331   55880 daemon_controller.go:257] Starting daemon sets controller
I0325 17:51:04.193347   55880 shared_informer.go:225] Waiting for caches to sync for daemon sets
I0325 17:51:04.193402   55880 ttl_controller.go:118] Starting TTL controller
I0325 17:51:04.193413   55880 shared_informer.go:225] Waiting for caches to sync for TTL
I0325 17:51:04.193436   55880 gc_controller.go:89] Starting GC controller
I0325 17:51:04.193441   55880 shared_informer.go:225] Waiting for caches to sync for GC
E0325 17:51:04.193476   55880 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0325 17:51:04.193503   55880 controllermanager.go:525] Skipping "service"
I0325 17:51:04.194166   55880 controllermanager.go:533] Started "persistentvolume-binder"
I0325 17:51:04.194229   55880 pv_controller_base.go:295] Starting persistent volume controller
I0325 17:51:04.194246   55880 shared_informer.go:225] Waiting for caches to sync for persistent volume
I0325 17:51:04.194576   55880 controllermanager.go:533] Started "endpoint"
I0325 17:51:04.194617   55880 endpoints_controller.go:182] Starting endpoint controller
I0325 17:51:04.194633   55880 shared_informer.go:225] Waiting for caches to sync for endpoint
I0325 17:51:04.194879   55880 node_lifecycle_controller.go:78] Sending events to api server
E0325 17:51:04.194923   55880 core.go:229] failed to start cloud node lifecycle controller: no cloud provider provided
W0325 17:51:04.194934   55880 controllermanager.go:525] Skipping "cloud-node-lifecycle"
W0325 17:51:04.195321   55880 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I0325 17:51:04.196107   55880 controllermanager.go:533] Started "attachdetach"
I0325 17:51:04.196333   55880 attach_detach_controller.go:338] Starting attach detach controller
I0325 17:51:04.196421   55880 shared_informer.go:225] Waiting for caches to sync for attach detach
I0325 17:51:04.196477   55880 controllermanager.go:533] Started "pv-protection"
I0325 17:51:04.198791   55880 pv_protection_controller.go:83] Starting PV protection controller
I0325 17:51:04.198811   55880 shared_informer.go:225] Waiting for caches to sync for PV protection
The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
W0325 17:51:04.239600   55880 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
I0325 17:51:04.269672   55880 shared_informer.go:232] Caches are synced for expand 
I0325 17:51:04.277231   55880 shared_informer.go:232] Caches are synced for namespace 
I0325 17:51:04.290404   55880 shared_informer.go:232] Caches are synced for ClusterRoleAggregator 
I0325 17:51:04.291124   55880 shared_informer.go:232] Caches are synced for service account 
I0325 17:51:04.293412   52411 controller.go:606] quota admission added evaluator for: serviceaccounts
I0325 17:51:04.293711   55880 shared_informer.go:232] Caches are synced for TTL 
I0325 17:51:04.298920   55880 shared_informer.go:232] Caches are synced for PV protection 
E0325 17:51:04.302576   55880 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
E0325 17:51:04.303080   55880 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0325 17:51:04.312281   55880 shared_informer.go:232] Caches are synced for certificate-csrapproving 
E0325 17:51:04.313936   55880 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   47s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests

+++ Running case: test-cmd.run_kubectl_version_tests 
... skipping 97 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0325 17:51:08] Creating namespace namespace-1585158668-20565
namespace/namespace-1585158668-20565 created
Context "test" modified.
+++ [0325 17:51:08] Testing RESTMapper
+++ [0325 17:51:09] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
... skipping 58 lines ...
namespace/namespace-1585158673-32728 created
Context "test" modified.
+++ [0325 17:51:14] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 58 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 25 lines ...
namespace/namespace-1585158683-8600 created
Context "test" modified.
+++ [0325 17:51:24] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:155: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:156: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:157: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 459 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
core.sh:189: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name, label selector, or --all flag specified
core.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:197: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector. 
core.sh:201: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:205: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:209: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:214: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 19 lines ...
poddisruptionbudget.policy/test-pdb-2 created
core.sh:258: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
poddisruptionbudget.policy/test-pdb-3 created
core.sh:264: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:268: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:274: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 206 lines ...
pod/valid-pod patched
core.sh:517: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
pod/valid-pod patched
core.sh:522: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.2:
pod/valid-pod patched
core.sh:538: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0325 17:52:01] "kubectl patch with resourceVersion 557" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:562: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W0325 17:52:02.645199   55880 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
core.sh:586: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:611: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced
core.sh:627: Successful get node node-v1-test {{.metadata.annotations.a}}: b
... skipping 26 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:2.0
    name: kubernetes-pause
has:localonlyvalue
core.sh:660: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:664: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:668: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:672: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:676: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 83 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0325 17:52:17] Creating namespace namespace-1585158737-10355
namespace/namespace-1585158737-10355 created
Context "test" modified.
+++ [0325 17:52:17] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 41 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [0325 17:52:17] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

... skipping 17 lines ...
(Bpod "test-pod" deleted
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I0325 17:52:21.242090   52411 client.go:361] parsed scheme: "endpoint"
I0325 17:52:21.242172   52411 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 17:52:21.245815   52411 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
kind.mygroup.example.com/myobj serverside-applied (server dry run)
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
+++ exit code: 0
Recording: run_kubectl_run_tests
Running command: run_kubectl_run_tests

+++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 12 lines ...
(Bpod "nginx-extensions" deleted
Successful
message:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 3 lines ...
Context "test" modified.
+++ [0325 17:52:23] Testing kubectl create filter
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 20 lines ...
apps.sh:131: Successful get deployments my-depl {{.spec.template.metadata.labels.l1}}: l1
apps.sh:132: Successful get deployments my-depl {{.spec.selector.matchLabels.l1}}: l1
apps.sh:133: Successful get deployments my-depl {{.metadata.labels.l1}}: <no value>
deployment.apps "my-depl" deleted
replicaset.apps "my-depl-76fb9d7d7d" deleted
pod "my-depl-76fb9d7d7d-x9z8b" deleted
E0325 17:52:26.237856   55880 replica_set.go:535] sync "namespace-1585158744-1495/my-depl-76fb9d7d7d" failed with replicasets.apps "my-depl-76fb9d7d7d" not found
apps.sh:139: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:140: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:141: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:145: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx created
I0325 17:52:26.928739   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158744-1495", Name:"nginx", UID:"16b362a0-02b1-4aad-97ec-692ede6df5a7", APIVersion:"apps/v1", ResourceVersion:"620", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9587c59df to 3
I0325 17:52:26.932928   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158744-1495", Name:"nginx-9587c59df", UID:"3cf5d666-0ddd-426c-89ed-8766e3134c88", APIVersion:"apps/v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9587c59df-wg7dw
I0325 17:52:26.936368   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158744-1495", Name:"nginx-9587c59df", UID:"3cf5d666-0ddd-426c-89ed-8766e3134c88", APIVersion:"apps/v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9587c59df-8tcws
I0325 17:52:26.937327   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158744-1495", Name:"nginx-9587c59df", UID:"3cf5d666-0ddd-426c-89ed-8766e3134c88", APIVersion:"apps/v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9587c59df-9sj67
apps.sh:149: Successful get deployment nginx {{.metadata.name}}: nginx
I0325 17:52:31.265713   55880 horizontal.go:354] Horizontal Pod Autoscaler frontend has been deleted in namespace-1585158733-11563
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1585158744-1495\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1585158744-1495"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
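The Conflict above is the API server's optimistic-concurrency check: the patch carried resourceVersion "99", which no longer matched the live Deployment. The usual client-side remedy is to re-read the object and retry the write; a minimal sketch with client-go's retry helper (the namespace, pod name, and label value are placeholders, not taken from the test):

    package conflictretry

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // updatePodLabel re-reads the object on every attempt so each Update
    // carries a fresh resourceVersion, and retries only on Conflict errors.
    func updatePodLabel(cs kubernetes.Interface, ns, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if pod.Labels == nil {
                pod.Labels = map[string]string{}
            }
            pod.Labels["name"] = "nginx2"
            _, err = cs.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
            return err
        })
    }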
deployment.apps/nginx configured
I0325 17:52:36.609436   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158744-1495", Name:"nginx", UID:"900d3c22-1428-451e-bf89-174a58ad9642", APIVersion:"apps/v1", ResourceVersion:"662", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6c499547c4 to 3
I0325 17:52:36.613714   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158744-1495", Name:"nginx-6c499547c4", UID:"0421547c-1594-4d7f-93b7-169b911c315a", APIVersion:"apps/v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-bcl9s
I0325 17:52:36.619166   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158744-1495", Name:"nginx-6c499547c4", UID:"0421547c-1594-4d7f-93b7-169b911c315a", APIVersion:"apps/v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-mphrm
I0325 17:52:36.619973   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158744-1495", Name:"nginx-6c499547c4", UID:"0421547c-1594-4d7f-93b7-169b911c315a", APIVersion:"apps/v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-289hl
Successful
message:        "name": "nginx2"
          "name": "nginx2"
has:"name": "nginx2"
E0325 17:52:41.064939   55880 replica_set.go:535] sync "namespace-1585158744-1495/nginx-6c499547c4" failed with Operation cannot be fulfilled on replicasets.apps "nginx-6c499547c4": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1585158744-1495/nginx-6c499547c4, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 0421547c-1594-4d7f-93b7-169b911c315a, UID in object meta: 
I0325 17:52:42.040264   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158744-1495", Name:"nginx", UID:"208da4d1-3f02-490e-8927-3377666ff845", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6c499547c4 to 3
I0325 17:52:42.043949   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158744-1495", Name:"nginx-6c499547c4", UID:"85f396ae-c676-46db-aad1-de1179577e89", APIVersion:"apps/v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-xjzp4
I0325 17:52:42.050119   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158744-1495", Name:"nginx-6c499547c4", UID:"85f396ae-c676-46db-aad1-de1179577e89", APIVersion:"apps/v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-4kgpp
I0325 17:52:42.050514   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158744-1495", Name:"nginx-6c499547c4", UID:"85f396ae-c676-46db-aad1-de1179577e89", APIVersion:"apps/v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-5q6xd
Successful
message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
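That rejection enforces the apps/v1 rule that spec.selector must match spec.template.metadata.labels, and the selector itself is immutable after creation. A valid pairing, sketched with the client-go API types (the function, name, and image are placeholders for illustration):

    package deployexample

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // newDeployment builds a Deployment whose selector and template labels
    // agree; apps/v1 rejects the object when they diverge, as in the log.
    func newDeployment(name string) *appsv1.Deployment {
        labels := map[string]string{"name": "nginx3"}
        return &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: appsv1.DeploymentSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "nginx",
                            Image: "k8s.gcr.io/nginx:test-cmd",
                        }},
                    },
                },
            },
        }
    }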
... skipping 215 lines ...
+++ [0325 17:52:46] Creating namespace namespace-1585158766-9125
namespace/namespace-1585158766-9125 created
Context "test" modified.
+++ [0325 17:52:46] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 23 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1585158766-9125 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1585158766-9125 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I0325 17:52:48.537333   67055 loader.go:375] Config loaded from file:  /tmp/tmp.fRyGwScrjj/.kube/config
I0325 17:52:48.538863   67055 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0325 17:52:48.574528   67055 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
I0325 17:52:48.576482   67055 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 496 lines ...
Successful
message:NAME    DATA   AGE
one     0      0s
three   0      0s
two     0      0s
STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
Successful
message:STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
+++ [0325 17:52:55] Creating namespace namespace-1585158775-17342
namespace/namespace-1585158775-17342 created
Context "test" modified.
get.sh:153: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
... skipping 104 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
<no value>Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2020-03-25T17:52:56Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fieldsType":"FieldsV1", "fieldsV1":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2020-03-25T17:52:56Z"}}, "name":"valid-pod", "namespace":"namespace-1585158775-17342", "resourceVersion":"751", "selfLink":"/api/v1/namespaces/namespace-1585158775-17342/pods/valid-pod", "uid":"d9fb0d6c-6410-4e7e-ad04-9df3b7745f78"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2020-03-25T17:52:56Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2020-03-25T17:52:56Z"}],"name":"valid-pod","namespace":"namespace-1585158775-17342","resourceVersion":"751","selfLink":"/api/v1/namespaces/namespace-1585158775-17342/pods/valid-pod","uid":"d9fb0d6c-6410-4e7e-ad04-9df3b7745f78"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2020-03-25T17:52:56Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2020-03-25T17:52:56Z]] name:valid-pod namespace:namespace-1585158775-17342 resourceVersion:751 selfLink:/api/v1/namespaces/namespace-1585158775-17342/pods/valid-pod uid:d9fb0d6c-6410-4e7e-ad04-9df3b7745f78] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
has:map has no entry for key "missing"
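Both template failures above come from strict missing-key handling: the jsonpath engine reports "missing is not found", and the go-template printer, presumably run with kubectl's --allow-missing-template-keys=false, maps to the standard library's missingkey=error option instead of printing "<no value>". A standalone Go reproduction of the go-template case:

    package main

    import (
        "fmt"
        "os"
        "text/template"
    )

    func main() {
        t := template.Must(template.New("output").Parse("{{.missing}}"))
        // By default an absent map key renders as "<no value>"; with
        // missingkey=error, Execute fails like the kubectl error above:
        //   executing "output" at <.missing>: map has no entry for key "missing"
        t.Option("missingkey=error")
        if err := t.Execute(os.Stdout, map[string]interface{}{"kind": "Pod"}); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }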
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:STATUS
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:valid-pod
Successful
message:pod/valid-pod
status/<unknown>
has not:STATUS
Successful
... skipping 81 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has not:STATUS
... skipping 78 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
... skipping 36 lines ...
+++ [0325 17:53:02] Creating namespace namespace-1585158782-6470
namespace/namespace-1585158782-6470 created
Context "test" modified.
+++ [0325 17:53:02] Testing kubectl exec POD COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 3 lines ...
+++ [0325 17:53:03] Creating namespace namespace-1585158783-21913
namespace/namespace-1585158783-21913 created
Context "test" modified.
+++ [0325 17:53:03] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: the server doesn't have a resource type "foo"
has:error:
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0325 17:53:04.263482   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158783-21913", Name:"frontend", UID:"059b16fb-301d-48f9-b9d3-0a267a988f16", APIVersion:"apps/v1", ResourceVersion:"812", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qth7b
I0325 17:53:04.267930   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158783-21913", Name:"frontend", UID:"059b16fb-301d-48f9-b9d3-0a267a988f16", APIVersion:"apps/v1", ResourceVersion:"812", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2rfdr
I0325 17:53:04.268261   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158783-21913", Name:"frontend", UID:"059b16fb-301d-48f9-b9d3-0a267a988f16", APIVersion:"apps/v1", ResourceVersion:"812", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kb9qq
configmap/test-set-env-config created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod, type/name or --filename must be specified
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-2rfdr does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-2rfdr does not have a host assigned
has not:pod, type/name or --filename must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"7e056dc1-ef65-4adf-9649-c1c77f8d09b3","resourceVersion":"832","creationTimestamp":"2020-03-25T17:53:05Z"}}
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"7e056dc1-ef65-4adf-9649-c1c77f8d09b3","resourceVersion":"833","creationTimestamp":"2020-03-25T17:53:05Z"},"data":{"key1":"config1"}}
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"7e056dc1-ef65-4adf-9649-c1c77f8d09b3","resourceVersion":"833","creationTimestamp":"2020-03-25T17:53:05Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"7e056dc1-ef65-4adf-9649-c1c77f8d09b3"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 166 lines ...
valid-pod   0/1     Pending   0          0s
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:Timeout exceeded while reading body
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          2s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 240 lines ...
foo.company.com/test patched
crd.sh:236: Successful get foos/test {{.patched}}: value1
foo.company.com/test patched
crd.sh:238: Successful get foos/test {{.patched}}: value2
foo.company.com/test patched
crd.sh:240: Successful get foos/test {{.patched}}: <no value>
+++ [0325 17:53:20] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 300 lines ...
crd.sh:450: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
namespace/non-native-resources created
bar.company.com/test created
crd.sh:455: Successful get bars {{len .items}}: 1
namespace "non-native-resources" deleted
crd.sh:458: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
+++ [0325 17:53:53] Testing recursive resources
+++ [0325 17:53:53] Creating namespace namespace-1585158833-14263
namespace/namespace-1585158833-14263 created
Context "test" modified.
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0325 17:53:54.176910   52411 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0325 17:53:54.178568   55880 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0325 17:53:54.179500   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0325 17:53:54.317281   52411 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0325 17:53:54.318758   55880 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0325 17:53:54.319843   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0325 17:53:54.441284   52411 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0325 17:53:54.442481   55880 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0325 17:53:54.443522   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0325 17:53:54.614227   52411 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0325 17:53:54.615526   55880 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0325 17:53:54.616442   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:Name:         busybox0
Namespace:    namespace-1585158833-14263
Priority:     0
Node:         <none>
... skipping 159 lines ...
has:Object 'Kind' is missing
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0325 17:53:56.371421   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
E0325 17:53:56.642878   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0325 17:53:56.776173   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0325 17:53:56.881847   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx created
I0325 17:53:56.888668   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158833-14263", Name:"nginx", UID:"8a0b7015-b1c0-47c0-b0a3-980bb9f80f00", APIVersion:"apps/v1", ResourceVersion:"1030", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9c6f87b75 to 3
I0325 17:53:56.895320   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158833-14263", Name:"nginx-9c6f87b75", UID:"6bc714f6-a5c4-47d2-815a-d782895a3eaf", APIVersion:"apps/v1", ResourceVersion:"1031", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9c6f87b75-n4sck
I0325 17:53:56.899215   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158833-14263", Name:"nginx-9c6f87b75", UID:"6bc714f6-a5c4-47d2-815a-d782895a3eaf", APIVersion:"apps/v1", ResourceVersion:"1031", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9c6f87b75-7th6k
I0325 17:53:56.900627   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158833-14263", Name:"nginx-9c6f87b75", UID:"6bc714f6-a5c4-47d2-815a-d782895a3eaf", APIVersion:"apps/v1", ResourceVersion:"1031", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9c6f87b75-7xfdj
generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
... skipping 47 lines ...
deployment.apps "nginx" deleted
generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
I0325 17:53:57.819554   55880 namespace_controller.go:185] Namespace has been deleted non-native-resources
generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
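The warning in that message corresponds to a zero-grace-period force delete. A hedged sketch of the equivalent command (the path mirrors the fixture directory named in the log):

  kubectl delete pods -f hack/testdata/recursive/pod --recursive --force --grace-period=0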
generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0325 17:53:59.675244   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585158833-14263", Name:"busybox0", UID:"51063f52-a31f-4285-a033-44340a8a8cdc", APIVersion:"v1", ResourceVersion:"1063", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-zf84l
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0325 17:53:59.681032   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585158833-14263", Name:"busybox1", UID:"6e455a2f-8b59-45a8-aee8-f204ac09f3ff", APIVersion:"v1", ResourceVersion:"1065", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-lbfsb
generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
E0325 17:54:01.078335   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
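The "<no value> 80" assertions above show each controller exposed on port 80 with no port name. A sketch of the expose step as presumably run against the recursive fixture (support for --recursive on expose is an assumption):

  kubectl expose -f hack/testdata/recursive/rc --recursive --port=80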
generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
E0325 17:54:01.924411   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0325 17:54:02.030954   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585158833-14263", Name:"busybox0", UID:"51063f52-a31f-4285-a033-44340a8a8cdc", APIVersion:"v1", ResourceVersion:"1088", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-4hdx5
I0325 17:54:02.043111   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585158833-14263", Name:"busybox1", UID:"6e455a2f-8b59-45a8-aee8-f204ac09f3ff", APIVersion:"v1", ResourceVersion:"1094", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-dsj6b
generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
E0325 17:54:02.314448   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
I0325 17:54:02.984995   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158833-14263", Name:"nginx1-deployment", UID:"773d4329-bb19-4f79-bd2a-84ad71fd03ca", APIVersion:"apps/v1", ResourceVersion:"1111", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-866c6857d5 to 2
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0325 17:54:02.989577   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158833-14263", Name:"nginx1-deployment-866c6857d5", UID:"c8609e6a-3f1b-4c35-8bac-8fc6db97f747", APIVersion:"apps/v1", ResourceVersion:"1112", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-866c6857d5-scq69
I0325 17:54:02.991203   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158833-14263", Name:"nginx0-deployment", UID:"6f42d350-daa8-4370-8d9f-8a3e363c7b46", APIVersion:"apps/v1", ResourceVersion:"1113", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-ff7db88b6 to 2
I0325 17:54:02.995000   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158833-14263", Name:"nginx1-deployment-866c6857d5", UID:"c8609e6a-3f1b-4c35-8bac-8fc6db97f747", APIVersion:"apps/v1", ResourceVersion:"1112", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-866c6857d5-hnj4b
I0325 17:54:02.997150   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158833-14263", Name:"nginx0-deployment-ff7db88b6", UID:"c327c065-af67-4e32-8708-52e5b96fafd4", APIVersion:"apps/v1", ResourceVersion:"1117", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-ff7db88b6-gzn64
I0325 17:54:03.009061   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158833-14263", Name:"nginx0-deployment-ff7db88b6", UID:"c327c065-af67-4e32-8708-52e5b96fafd4", APIVersion:"apps/v1", ResourceVersion:"1117", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-ff7db88b6-6gdjv
E0325 17:54:03.120592   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0325 17:54:05.679781   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585158833-14263", Name:"busybox0", UID:"9906d153-155e-4870-a89a-e3908e0f16e5", APIVersion:"v1", ResourceVersion:"1161", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-cmmtt
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0325 17:54:05.688602   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585158833-14263", Name:"busybox1", UID:"b7e60ce9-a85d-4144-8102-691fb2709039", APIVersion:"v1", ResourceVersion:"1163", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-82zkk
generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [0325 17:54:07] Testing kubectl(v1:namespaces)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created (dry run)
namespace/my-namespace created (server dry run)
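Both dry-run variants are exercised above: the client variant never contacts the server, while the server variant submits the request with dry-run semantics so nothing is persisted. A sketch (flag spellings as in kubectl of this era):

  kubectl create namespace my-namespace --dry-run=client -o yaml
  kubectl create namespace my-namespace --dry-run=server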
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
I0325 17:54:08.878641   55880 shared_informer.go:225] Waiting for caches to sync for resource quota
I0325 17:54:08.878699   55880 shared_informer.go:232] Caches are synced for resource quota 
core.sh:1413: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
(Bnamespace "my-namespace" deleted
I0325 17:54:09.390674   55880 shared_informer.go:225] Waiting for caches to sync for garbage collector
I0325 17:54:09.390744   55880 shared_informer.go:232] Caches are synced for garbage collector 
E0325 17:54:10.178876   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0325 17:54:11.044677   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0325 17:54:12.923483   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0325 17:54:14.148074   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1422: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
... skipping 28 lines ...
namespace "namespace-1585158787-21535" deleted
namespace "namespace-1585158787-26215" deleted
namespace "namespace-1585158790-4888" deleted
namespace "namespace-1585158792-21557" deleted
namespace "namespace-1585158794-17" deleted
namespace "namespace-1585158833-14263" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1585158665-30526" deleted
... skipping 26 lines ...
namespace "namespace-1585158787-21535" deleted
namespace "namespace-1585158787-26215" deleted
namespace "namespace-1585158790-4888" deleted
namespace "namespace-1585158792-21557" deleted
namespace "namespace-1585158794-17" deleted
namespace "namespace-1585158833-14263" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
namespace/quotas created
core.sh:1429: Successful get namespaces/quotas {{.metadata.name}}: quotas
core.sh:1430: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
resourcequota/test-quota created (dry run)
I0325 17:54:15.322189   55880 horizontal.go:354] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1585158833-14263
... skipping 10 lines ...
core.sh:1453: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1457: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
core.sh:1461: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:1463: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
core.sh:1470: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1474: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
(Bnamespace "other" deleted
... skipping 85 lines ...
namespace/test-secrets created
core.sh:804: Successful get namespaces/test-secrets {{.metadata.name}}: test-secrets
core.sh:808: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
core.sh:812: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:813: Successful get secret/test-secret --namespace=test-secrets {{.type}}: test-type
E0325 17:54:29.222365   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret "test-secret" deleted
core.sh:823: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
core.sh:827: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:828: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
secret "test-secret" deleted
E0325 17:54:30.063348   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:838: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
core.sh:841: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:842: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
secret "test-secret" deleted
secret/test-secret created
... skipping 6 lines ...
core.sh:873: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
secret "secret-string-data" deleted
core.sh:882: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
(Bsecret "test-secret" deleted
namespace "test-secrets" deleted
I0325 17:54:32.647353   55880 namespace_controller.go:185] Namespace has been deleted other
E0325 17:54:33.023533   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0325 17:54:37.126561   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 30 lines ...
+++ command: run_client_config_tests
+++ [0325 17:54:45] Creating namespace namespace-1585158885-27864
namespace/namespace-1585158885-27864 created
Context "test" modified.
+++ [0325 17:54:45] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
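Each failure above can be reproduced by pointing kubectl at a nonexistent kubeconfig element. A sketch (the names come straight from the log output):

  kubectl get pods --kubeconfig=missing          # error: stat missing: no such file or directory
  kubectl get pods --context=missing-context     # context was not found for specified context
  kubectl get pods --cluster=missing-cluster     # no server found for cluster "missing-cluster"
  kubectl get pods --user=missing-user           # auth info "missing-user" does not exist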
+++ exit code: 0
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
... skipping 43 lines ...
Labels:                        <none>
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
... skipping 38 lines ...
                job-name=test-job
Annotations:    cronjob.kubernetes.io/instantiate: manual
Controlled By:  CronJob/pi
Parallelism:    1
Completions:    1
Start Time:     Wed, 25 Mar 2020 17:54:55 +0000
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=5d669dc9-dd20-4225-87fd-9acd9fdb455d
           job-name=test-job
  Containers:
   pi:
    Image:      k8s.gcr.io/perl
... skipping 60 lines ...
podtemplate/nginx created
core.sh:1539: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
NAME    CONTAINERS   IMAGES   POD LABELS
nginx   nginx        nginx    name=nginx
core.sh:1547: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
podtemplate "nginx" deleted
E0325 17:55:04.371348   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1551: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
+++ exit code: 0
Recording: run_service_tests
Running command: run_service_tests

+++ Running case: test-cmd.run_service_tests 
... skipping 375 lines ...
  selector:
    role: padawan
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
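As the message says, --local needs an explicit -f so kubectl has an object to mutate client-side instead of fetching one from the server. A sketch of the working form (redis-master-service.yaml is a placeholder for the service manifest):

  kubectl set selector -f redis-master-service.yaml role=padawan --local -o yaml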
core.sh:980: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
service/redis-master selector updated
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
core.sh:993: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
(Bservice "redis-master" deleted
core.sh:1000: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:1004: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
service/redis-master created
core.sh:1008: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
core.sh:1012: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
service/service-v1-test created
core.sh:1033: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
E0325 17:55:09.571783   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/service-v1-test replaced
core.sh:1040: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
(Bservice "redis-master" deleted
service "service-v1-test" deleted
core.sh:1048: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:1052: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
... skipping 35 lines ...
core.sh:1113: Successful get service testmetadata {{.metadata.annotations}}: map[zone-context:home]
service/exposemetadata exposed
core.sh:1119: Successful get service exposemetadata {{.metadata.annotations}}: map[zone-context:work]
(Bservice "exposemetadata" deleted
service "testmetadata" deleted
pod "testmetadata" deleted
E0325 17:55:14.187314   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_daemonset_tests
Running command: run_daemonset_tests

+++ Running case: test-cmd.run_daemonset_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 25 lines ...
+++ Running case: test-cmd.run_daemonset_history_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_daemonset_history_tests
+++ [0325 17:55:16] Creating namespace namespace-1585158916-20201
namespace/namespace-1585158916-20201 created
Context "test" modified.
E0325 17:55:16.766970   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ [0325 17:55:16] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
daemonset.apps/bind created
apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1585158916-20201"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
daemonset.apps/bind skipped rollback (current template already matches revision 1)
... skipping 22 lines ...
apps.sh:85: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:86: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps/bind rolled back
apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
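Rollbacks to a nonexistent revision fail with the message above, while a valid revision number succeeds. A sketch using the daemonset from this test:

  kubectl rollout undo daemonset/bind --to-revision=1000000   # fails: revision not found
  kubectl rollout undo daemonset/bind --to-revision=1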
apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:95: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind rolled back
E0325 17:55:20.245418   55880 daemon_controller.go:292] namespace-1585158916-20201/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1585158916-20201", SelfLink:"/apis/apps/v1/namespaces/namespace-1585158916-20201/daemonsets/bind", UID:"56c26138-9924-461d-84ab-18c921eed11e", ResourceVersion:"1689", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720755717, loc:(*time.Location)(0x6cea280)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1585158916-20201\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc003461020), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003461040)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc003461060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003461080)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0034610a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00287c058), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000142070), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc0034610c0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0004988c8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00287c0ac)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
(Bapps.sh:99: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
(Bapps.sh:100: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
(Bdaemonset.apps "bind" deleted
+++ exit code: 0
Recording: run_rc_tests
... skipping 32 lines ...
Namespace:    namespace-1585158920-29891
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1585158920-29891
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1585158920-29891
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1585158920-29891
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 27 lines ...
Namespace:    namespace-1585158920-29891
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1585158920-29891
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1585158920-29891
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
Namespace:    namespace-1585158920-29891
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 15 lines ...
core.sh:1178: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E0325 17:55:23.464576   55880 replica_set.go:200] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1585158920-29891 /api/v1/namespaces/namespace-1585158920-29891/replicationcontrollers/frontend fc5c023b-a096-4308-8d6d-c505d6882cbe 1726 2 2020-03-25 17:55:21 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  [{kube-controller-manager Update v1 2020-03-25 17:55:21 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}} {kubectl Update v1 2020-03-25 17:55:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:replicas":{},"f:selector":{".":{},"f:app":{},"f:tier":{}},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002823eb8 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0325 17:55:23.471269   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585158920-29891", Name:"frontend", UID:"fc5c023b-a096-4308-8d6d-c505d6882cbe", APIVersion:"v1", ResourceVersion:"1726", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-w84vb
core.sh:1182: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1186: Successful get rc frontend {{.spec.replicas}}: 2
error: Expected replicas to be 3, was 2
core.sh:1190: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1194: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller/frontend scaled
I0325 17:55:24.156849   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585158920-29891", Name:"frontend", UID:"fc5c023b-a096-4308-8d6d-c505d6882cbe", APIVersion:"v1", ResourceVersion:"1732", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-txdhn
core.sh:1198: Successful get rc frontend {{.spec.replicas}}: 3
core.sh:1202: Successful get rc frontend {{.spec.replicas}}: 3
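
The "Expected replicas to be 3, was 2" error above is kubectl scale's --current-replicas precondition failing; a plausible reconstruction with the names from the run (flags per standard kubectl usage, not the suite's literal commands):
  # Unconditional scale down
  kubectl scale rc frontend --replicas=2
  # Conditional scale: rejected, the rc currently has 2 replicas, not 3
  kubectl scale rc frontend --current-replicas=3 --replicas=1
  # Conditional scale whose precondition holds
  kubectl scale rc frontend --current-replicas=2 --replicas=3
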
... skipping 31 lines ...
deployment.apps "nginx-deployment" deleted
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
See 'kubectl expose -h' for help and examples
has:invalid deployment: no selectors
deployment.apps/nginx-deployment created
I0325 17:55:26.895966   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158920-29891", Name:"nginx-deployment", UID:"27b54876-2bda-49ac-80f0-cd8e9d4a538f", APIVersion:"apps/v1", ResourceVersion:"1837", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6866878c7b to 3
I0325 17:55:26.903480   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158920-29891", Name:"nginx-deployment-6866878c7b", UID:"c1e44dcf-eb48-4310-aa4d-496f8634f7d7", APIVersion:"apps/v1", ResourceVersion:"1838", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6866878c7b-bl5xm
I0325 17:55:26.918334   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158920-29891", Name:"nginx-deployment-6866878c7b", UID:"c1e44dcf-eb48-4310-aa4d-496f8634f7d7", APIVersion:"apps/v1", ResourceVersion:"1838", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6866878c7b-lkwmd
... skipping 23 lines ...
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
service "frontend-5" deleted
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
Successful
message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
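
The expose failures above are reproducible directly; a sketch of the two rejected cases, using resource names that appear in the run:
  # Rejected: a Node is not an exposable resource
  kubectl expose node 127.0.0.1 --port=80
  # Rejected: metadata.name must be no more than 63 characters
  kubectl expose deployment nginx-deployment --port=80 --name=invalid-large-service-name-that-has-more-than-sixty-three-characters
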
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1345: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1349: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
replicationcontroller "frontend" deleted
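
The 'required flag(s) "max" not set' error above comes from kubectl autoscale, for which --max is mandatory; a minimal sketch matching the HPA values checked above:
  # Valid: HPA with min 2, max 3, target 80% CPU
  kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80
  # Rejected: --max is required
  kubectl autoscale rc frontend --min=2 --cpu-percent=80
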
core.sh:1358: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 24 lines ...
          limits:
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
deployment.apps/nginx-deployment-resources created
I0325 17:55:34.142466   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158920-29891", Name:"nginx-deployment-resources", UID:"8abcaf04-0031-4fdd-bed5-3844ad5ff6f5", APIVersion:"apps/v1", ResourceVersion:"2003", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-79666b9cd9 to 3
I0325 17:55:34.146261   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158920-29891", Name:"nginx-deployment-resources-79666b9cd9", UID:"0263509e-ae0f-410f-85d2-a5cc0a02c3ac", APIVersion:"apps/v1", ResourceVersion:"2004", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-79666b9cd9-49p7b
I0325 17:55:34.155901   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158920-29891", Name:"nginx-deployment-resources-79666b9cd9", UID:"0263509e-ae0f-410f-85d2-a5cc0a02c3ac", APIVersion:"apps/v1", ResourceVersion:"2004", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-79666b9cd9-hpt42
I0325 17:55:34.155949   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158920-29891", Name:"nginx-deployment-resources-79666b9cd9", UID:"0263509e-ae0f-410f-85d2-a5cc0a02c3ac", APIVersion:"apps/v1", ResourceVersion:"2004", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-79666b9cd9-l5j4n
core.sh:1364: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
core.sh:1365: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
core.sh:1366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment-resources resource requirements updated
I0325 17:55:34.640128   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158920-29891", Name:"nginx-deployment-resources", UID:"8abcaf04-0031-4fdd-bed5-3844ad5ff6f5", APIVersion:"apps/v1", ResourceVersion:"2017", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-8b888884f to 1
I0325 17:55:34.644778   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158920-29891", Name:"nginx-deployment-resources-8b888884f", UID:"f6c87496-a99f-4b1d-8e4d-aebae751c868", APIVersion:"apps/v1", ResourceVersion:"2018", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-8b888884f-ddzds
core.sh:1369: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
core.sh:1370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
error: unable to find container named redis
deployment.apps/nginx-deployment-resources resource requirements updated
I0325 17:55:35.136816   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158920-29891", Name:"nginx-deployment-resources", UID:"8abcaf04-0031-4fdd-bed5-3844ad5ff6f5", APIVersion:"apps/v1", ResourceVersion:"2027", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-79666b9cd9 to 2
I0325 17:55:35.144235   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158920-29891", Name:"nginx-deployment-resources-79666b9cd9", UID:"0263509e-ae0f-410f-85d2-a5cc0a02c3ac", APIVersion:"apps/v1", ResourceVersion:"2031", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-79666b9cd9-hpt42
I0325 17:55:35.147146   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158920-29891", Name:"nginx-deployment-resources", UID:"8abcaf04-0031-4fdd-bed5-3844ad5ff6f5", APIVersion:"apps/v1", ResourceVersion:"2029", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-76f48f979f to 1
I0325 17:55:35.153363   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158920-29891", Name:"nginx-deployment-resources-76f48f979f", UID:"c4d758c8-4631-4997-bd9e-f7ec5e75d9f2", APIVersion:"apps/v1", ResourceVersion:"2035", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-76f48f979f-xmtgp
core.sh:1375: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
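
The resource updates above are kubectl set resources; a sketch against the deployment in the run. The container names below (perl, redis) are inferred from the images shown and from the error message, not confirmed by the log:
  # Update limits on one container by name
  kubectl set resources deployment nginx-deployment-resources -c=perl --limits=cpu=300m --requests=cpu=300m
  # Rejected: the pod template has no container named "redis"
  kubectl set resources deployment nginx-deployment-resources -c=redis --limits=cpu=100m
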
... skipping 363 lines ...
    status: "True"
    type: Progressing
  observedGeneration: 4
  replicas: 4
  unavailableReplicas: 4
  updatedReplicas: 1
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1386: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1387: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1388: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
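
The --local error above: with --local, kubectl renders the change client-side and therefore needs the object from a file rather than the server. A sketch; the manifest file name is illustrative:
  # Rejected: --local without -f has nothing to operate on
  kubectl set resources deployment nginx-deployment-resources --limits=cpu=200m --local -o yaml
  # Works: read the object from a manifest and print the modified version without contacting the server
  kubectl set resources -f deployment.yaml --limits=cpu=200m --local -o yaml
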
... skipping 47 lines ...
                pod-template-hash=c9cc54d87
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-nginx-apps
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-nginx-apps
           pod-template-hash=c9cc54d87
  Containers:
   nginx:
    Image:        k8s.gcr.io/nginx:test-cmd
... skipping 98 lines ...
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/nginx configured
I0325 17:55:44.791998   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158936-21930", Name:"nginx", UID:"ce76f5dc-acb4-4e61-8154-7d6df47c4f7a", APIVersion:"apps/v1", ResourceVersion:"2229", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-697546885c to 1
I0325 17:55:44.797483   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158936-21930", Name:"nginx-697546885c", UID:"c4615c97-1e21-4d64-bb4c-3ddbb7503ff3", APIVersion:"apps/v1", ResourceVersion:"2230", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-697546885c-2crrx
apps.sh:301: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
    Image:	k8s.gcr.io/nginx:test-cmd
E0325 17:55:45.065831   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx rolled back (server dry run)
apps.sh:305: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx rolled back
apps.sh:309: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
error: unable to find specified revision 1000000 in history
apps.sh:312: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps/nginx rolled back
E0325 17:55:47.517145   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0325 17:55:47.805570   55880 horizontal.go:354] Horizontal Pod Autoscaler frontend has been deleted in namespace-1585158920-29891
apps.sh:316: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx paused
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
deployment.apps/nginx resumed
deployment.apps/nginx rolled back
    deployment.kubernetes.io/revision-history: 1,3
error: desired revision (3) is different from the running revision (5)
deployment.apps/nginx restarted
I0325 17:55:49.523028   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158936-21930", Name:"nginx", UID:"ce76f5dc-acb4-4e61-8154-7d6df47c4f7a", APIVersion:"apps/v1", ResourceVersion:"2264", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-697546885c to 0
I0325 17:55:49.532521   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158936-21930", Name:"nginx-697546885c", UID:"c4615c97-1e21-4d64-bb4c-3ddbb7503ff3", APIVersion:"apps/v1", ResourceVersion:"2268", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-697546885c-2crrx
I0325 17:55:49.534682   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158936-21930", Name:"nginx", UID:"ce76f5dc-acb4-4e61-8154-7d6df47c4f7a", APIVersion:"apps/v1", ResourceVersion:"2267", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-794bb44549 to 1
I0325 17:55:49.540798   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158936-21930", Name:"nginx-794bb44549", UID:"3f15e8c0-7504-46c4-a937-442410424ce5", APIVersion:"apps/v1", ResourceVersion:"2272", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-794bb44549-l9xcw
Successful
... skipping 146 lines ...
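
The rollout sequence above, reconstructed as the corresponding kubectl commands (deployment name from the run; order mirrors the log):
  kubectl rollout undo deployment/nginx --dry-run=server        # validate the rollback server-side only
  kubectl rollout undo deployment/nginx                         # roll back to the previous revision
  kubectl rollout undo deployment/nginx --to-revision=1000000   # rejected: revision not in history
  kubectl rollout pause deployment/nginx
  kubectl rollout undo deployment/nginx      # rejected: cannot roll back a paused deployment
  kubectl rollout restart deployment/nginx   # rejected while paused, too
  kubectl rollout resume deployment/nginx
  kubectl rollout restart deployment/nginx   # now triggers a fresh rollout
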
apps.sh:356: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
I0325 17:55:52.717098   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158936-21930", Name:"nginx-deployment", UID:"ba63a716-4ad5-4ca8-be8d-85c2d415be59", APIVersion:"apps/v1", ResourceVersion:"2334", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6d5f69bf98 to 1
I0325 17:55:52.724058   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158936-21930", Name:"nginx-deployment-6d5f69bf98", UID:"d764d4f5-ae9b-422c-84e0-c045f46c2758", APIVersion:"apps/v1", ResourceVersion:"2335", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6d5f69bf98-t5rrw
apps.sh:359: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:360: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
error: unable to find container named "redis"
deployment.apps/nginx-deployment image updated
apps.sh:365: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
apps.sh:369: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
... skipping 46 lines ...
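
The image updates above are kubectl set image; a sketch, assuming the first container is named "nginx" (inferred from its image, not confirmed by the log):
  # Update one container by name
  kubectl set image deployment nginx-deployment nginx=k8s.gcr.io/nginx:1.7.9
  # Rejected: no container named "redis" in the pod template
  kubectl set image deployment nginx-deployment redis=redis
  # Update every container in the template at once
  kubectl set image deployment nginx-deployment '*=k8s.gcr.io/nginx:test-cmd'
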
I0325 17:55:57.687771   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158936-21930", Name:"nginx-deployment-85f7d5566f", UID:"50782337-c9a5-4b1b-8fd7-0d0b289deade", APIVersion:"apps/v1", ResourceVersion:"2466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85f7d5566f-bbk5t
deployment.apps/nginx-deployment env updated
I0325 17:55:57.805381   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158936-21930", Name:"nginx-deployment", UID:"0c25215f-2919-4aaf-bca6-e0250bab5dca", APIVersion:"apps/v1", ResourceVersion:"2473", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-5d757cf5f8 to 0
I0325 17:55:57.820018   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158936-21930", Name:"nginx-deployment", UID:"0c25215f-2919-4aaf-bca6-e0250bab5dca", APIVersion:"apps/v1", ResourceVersion:"2475", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-75bb56f9c to 1
deployment.apps/nginx-deployment env updated
I0325 17:55:57.991522   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158936-21930", Name:"nginx-deployment-5d757cf5f8", UID:"04b1ae9b-4de6-4f62-a369-db67b1985387", APIVersion:"apps/v1", ResourceVersion:"2476", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5d757cf5f8-5szbh
E0325 17:55:58.031652   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment env updated
I0325 17:55:58.085139   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158936-21930", Name:"nginx-deployment", UID:"0c25215f-2919-4aaf-bca6-e0250bab5dca", APIVersion:"apps/v1", ResourceVersion:"2485", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-5859b66c86 to 0
deployment.apps "nginx-deployment" deleted
I0325 17:55:58.233934   55880 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585158936-21930", Name:"nginx-deployment", UID:"0c25215f-2919-4aaf-bca6-e0250bab5dca", APIVersion:"apps/v1", ResourceVersion:"2491", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-dfd7cb955 to 1
E0325 17:55:58.285936   55880 replica_set.go:535] sync "namespace-1585158936-21930/nginx-deployment-85f7d5566f" failed with replicasets.apps "nginx-deployment-85f7d5566f" not found
configmap "test-set-env-config" deleted
I0325 17:55:58.336579   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158936-21930", Name:"nginx-deployment-5859b66c86", UID:"7d5ae8c1-b1d6-404d-8e69-97008c25af91", APIVersion:"apps/v1", ResourceVersion:"2492", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5859b66c86-q4fr8
I0325 17:55:58.353409   55880 horizontal.go:354] Horizontal Pod Autoscaler nginx-deployment has been deleted in namespace-1585158936-21930
E0325 17:55:58.386994   55880 replica_set.go:535] sync "namespace-1585158936-21930/nginx-deployment-75bb56f9c" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-75bb56f9c": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1585158936-21930/nginx-deployment-75bb56f9c, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: b4bdc6f9-e3a7-4bcb-9a43-13d7b06fd18f, UID in object meta: 
I0325 17:55:58.439085   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158936-21930", Name:"nginx-deployment-dfd7cb955", UID:"8a3be253-48a7-47fa-aec8-ec6c15a1fdc9", APIVersion:"apps/v1", ResourceVersion:"2510", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dfd7cb955-hb9wh
secret "test-set-env-secret" deleted
E0325 17:55:58.486087   55880 replica_set.go:535] sync "namespace-1585158936-21930/nginx-deployment-5d757cf5f8" failed with replicasets.apps "nginx-deployment-5d757cf5f8" not found
+++ exit code: 0
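
The env updates above use kubectl set env; the configmap and secret names match the objects deleted in the log, while the literal variable and prefix are illustrative:
  # Set a literal environment variable on every container
  kubectl set env deployment/nginx-deployment MY_ENV=value
  # Import all keys from a configmap or secret as environment variables
  kubectl set env deployment/nginx-deployment --from=configmap/test-set-env-config
  kubectl set env deployment/nginx-deployment --from=secret/test-set-env-secret --prefix=SECRET_
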
Recording: run_rs_tests
Running command: run_rs_tests

+++ Running case: test-cmd.run_rs_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rs_tests
+++ [0325 17:55:58] Creating namespace namespace-1585158958-31649
E0325 17:55:58.736352   55880 replica_set.go:535] sync "namespace-1585158936-21930/nginx-deployment-5859b66c86" failed with replicasets.apps "nginx-deployment-5859b66c86" not found
namespace/namespace-1585158958-31649 created
E0325 17:55:58.786364   55880 replica_set.go:535] sync "namespace-1585158936-21930/nginx-deployment-dfd7cb955" failed with replicasets.apps "nginx-deployment-dfd7cb955" not found
Context "test" modified.
+++ [0325 17:55:58] Testing kubectl(v1:replicasets)
apps.sh:533: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I0325 17:55:59.214591   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158958-31649", Name:"frontend", UID:"c4dff215-7b00-43b3-af04-b601633531dd", APIVersion:"apps/v1", ResourceVersion:"2522", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4b4gp
I0325 17:55:59.220200   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158958-31649", Name:"frontend", UID:"c4dff215-7b00-43b3-af04-b601633531dd", APIVersion:"apps/v1", ResourceVersion:"2522", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-p87cr
I0325 17:55:59.220242   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158958-31649", Name:"frontend", UID:"c4dff215-7b00-43b3-af04-b601633531dd", APIVersion:"apps/v1", ResourceVersion:"2522", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-48zlq
+++ [0325 17:55:59] Deleting rs
replicaset.apps "frontend" deleted
E0325 17:55:59.436639   55880 replica_set.go:535] sync "namespace-1585158958-31649/frontend" failed with replicasets.apps "frontend" not found
apps.sh:539: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:543: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I0325 17:55:59.901258   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158958-31649", Name:"frontend", UID:"c38175ad-8993-4832-93c9-3697131a4ea6", APIVersion:"apps/v1", ResourceVersion:"2541", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-q46b5
I0325 17:55:59.905556   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158958-31649", Name:"frontend", UID:"c38175ad-8993-4832-93c9-3697131a4ea6", APIVersion:"apps/v1", ResourceVersion:"2541", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5dzxk
I0325 17:55:59.907322   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585158958-31649", Name:"frontend", UID:"c38175ad-8993-4832-93c9-3697131a4ea6", APIVersion:"apps/v1", ResourceVersion:"2541", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2xwz7
apps.sh:547: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
+++ [0325 17:56:00] Deleting rs
replicaset.apps "frontend" deleted
E0325 17:56:00.286310   55880 replica_set.go:535] sync "namespace-1585158958-31649/frontend" failed with replicasets.apps "frontend" not found
apps.sh:551: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:553: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
pod "frontend-2xwz7" deleted
pod "frontend-5dzxk" deleted
pod "frontend-q46b5" deleted
apps.sh:556: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 15 lines ...
Namespace:    namespace-1585158958-31649
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1585158958-31649
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
Namespace:    namespace-1585158958-31649
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
Namespace:    namespace-1585158958-31649
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 25 lines ...
Namespace:    namespace-1585158958-31649
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1585158958-31649
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1585158958-31649
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
Namespace:    namespace-1585158958-31649
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 198 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:680: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:684: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
replicaset.apps "frontend" deleted
+++ exit code: 0
Recording: run_stateful_set_tests
Running command: run_stateful_set_tests

+++ Running case: test-cmd.run_stateful_set_tests 
... skipping 38 lines ...
apps.sh:450: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
apps.sh:451: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:452: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
apps.sh:453: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"app":"nginx-statefulset"},"name":"nginx","namespace":"namespace-1585158973-17066"},"spec":{"replicas":0,"selector":{"matchLabels":{"app":"nginx-statefulset"}},"serviceName":"nginx","template":{"metadata":{"labels":{"app":"nginx-statefulset"}},"spec":{"containers":[{"command":["sh","-c","while true; do sleep 1; done"],"image":"k8s.gcr.io/nginx-slim:0.7","name":"nginx","ports":[{"containerPort":80,"name":"web"}]}],"terminationGracePeriodSeconds":5}},"updateStrategy":{"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"app":"nginx-statefulset"},"name":"nginx","namespace":"namespace-1585158973-17066"},"spec":{"replicas":0,"selector":{"matchLabels":{"app":"nginx-statefulset"}},"serviceName":"nginx","template":{"metadata":{"labels":{"app":"nginx-statefulset"}},"spec":{"containers":[{"command":["sh","-c","while true; do sleep 1; done"],"image":"k8s.gcr.io/nginx-slim:0.8","name":"nginx","ports":[{"containerPort":80,"name":"web"}]},{"image":"k8s.gcr.io/pause:2.0","name":"pause","ports":[{"containerPort":81,"name":"web-2"}]}],"terminationGracePeriodSeconds":5}},"updateStrategy":{"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
E0325 17:56:15.224552   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
statefulset.apps/nginx will roll back to Pod Template:
  Labels:	app=nginx-statefulset
  Containers:
   nginx:
    Image:	k8s.gcr.io/nginx-slim:0.7
    Port:	80/TCP
... skipping 11 lines ...
apps.sh:458: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:459: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
statefulset.apps/nginx rolled back
apps.sh:462: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:463: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
apps.sh:467: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:468: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
statefulset.apps/nginx rolled back
apps.sh:471: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
apps.sh:472: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 58 lines ...
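
The statefulset rollback above uses the same rollout verbs as deployments; a sketch against the objects in the run, with the template query mirroring the apps.sh checks:
  kubectl rollout undo statefulset/nginx                         # back to the previous template
  kubectl rollout undo statefulset/nginx --to-revision=1000000   # rejected: revision not in history
  kubectl get statefulset nginx -o go-template='{{range .spec.template.spec.containers}}{{.image}}:{{end}}'
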
Name:         mock
Namespace:    namespace-1585158978-21919
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 56 lines ...
Name:         mock
Namespace:    namespace-1585158978-21919
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 57 lines ...
Name:         mock
Namespace:    namespace-1585158978-21919
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 41 lines ...
Namespace:    namespace-1585158978-21919
Selector:     app=mock
Labels:       app=mock
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 11 lines ...
Namespace:    namespace-1585158978-21919
Selector:     app=mock2
Labels:       app=mock2
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock2
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 2 lines ...
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: mock2-6czn7
E0325 17:56:29.162448   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller "mock" deleted
replicationcontroller "mock2" deleted
replicationcontroller/mock replaced
replicationcontroller/mock2 replaced
I0325 17:56:29.393070   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585158978-21919", Name:"mock", UID:"bb866b5b-9d26-47bb-9bbf-4518ff8ec4af", APIVersion:"v1", ResourceVersion:"3014", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mock-64btc
I0325 17:56:29.396717   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585158978-21919", Name:"mock2", UID:"8ec3d3c9-be9d-4228-a9d2-b786a27721cc", APIVersion:"v1", ResourceVersion:"3015", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mock2-t5btp
... skipping 13 lines ...
generic-resources.sh:161: Successful get rc mock2 {{.metadata.annotations.annotated}}: true
replicationcontroller "mock" deleted
replicationcontroller "mock2" deleted
Testing with file hack/testdata/multi-resource-svclist.json and replace with file hack/testdata/multi-resource-svclist-modify.json
generic-resources.sh:63: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
generic-resources.sh:64: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E0325 17:56:31.483739   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/mock created
service/mock2 created
generic-resources.sh:70: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: mock:mock2:
NAME    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
mock    ClusterIP   10.0.0.28    <none>        99/TCP    0s
mock2   ClusterIP   10.0.0.40    <none>        99/TCP    0s
... skipping 20 lines ...
IP:                10.0.0.40
Port:              <unset>  99/TCP
TargetPort:        9949/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
E0325 17:56:32.124351   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "mock" deleted
service "mock2" deleted
service/mock replaced
service/mock2 replaced
generic-resources.sh:96: Successful get services mock {{.metadata.labels.status}}: replaced
generic-resources.sh:98: Successful get services mock2 {{.metadata.labels.status}}: replaced
... skipping 38 lines ...
storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
(Bpersistentvolume "pv0001" deleted
persistentvolume/pv0002 created
storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
(Bpersistentvolume "pv0002" deleted
persistentvolume/pv0003 created
E0325 17:56:37.080511   55880 pv_protection_controller.go:118] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
(Bpersistentvolume "pv0003" deleted
storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
persistentvolume/pv0001 created
E0325 17:56:37.671524   55880 pv_protection_controller.go:118] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:45: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
persistentvolume "pv0001" deleted
has:warning: deleting cluster-scoped resources
Successful
... skipping 539 lines ...
yes
has:the server doesn't have a resource type
Successful
message:yes
has:yes
Successful
message:error: --subresource can not be used with NonResourceURL
has:subresource can not be used with NonResourceURL
Successful
Successful
message:yes
0
has:0
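
The --subresource error above comes from kubectl auth can-i, which accepts either a resource or a non-resource URL but only applies --subresource to the former; a sketch:
  kubectl auth can-i get pods --subresource=log    # resource plus subresource: fine
  kubectl auth can-i get /logs                     # non-resource URL: fine
  kubectl auth can-i get /logs --subresource=log   # rejected: --subresource cannot apply to a URL
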
... skipping 59 lines ...
		{Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
legacy-script.sh:821: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
legacy-script.sh:822: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
legacy-script.sh:823: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
legacy-script.sh:824: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
Successful
message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
has:only rbac.authorization.k8s.io/v1 is supported
rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
role.rbac.authorization.k8s.io "testing-R" deleted
warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
... skipping 20 lines ...
replicationcontroller/cassandra created
I0325 17:56:47.912198   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585159007-15984", Name:"cassandra", UID:"44e15b7f-7d59-4ab6-87c0-18ce6704b630", APIVersion:"v1", ResourceVersion:"3183", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-2wzdn
I0325 17:56:47.917977   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585159007-15984", Name:"cassandra", UID:"44e15b7f-7d59-4ab6-87c0-18ce6704b630", APIVersion:"v1", ResourceVersion:"3183", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-lqgts
service/cassandra created
Waiting for Get all -l'app=cassandra' {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}} : expected: cassandra:cassandra:cassandra:cassandra::, got: cassandra:cassandra:cassandra:cassandra:

discovery.sh:91: FAIL!
Get all -l'app=cassandra' {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}
  Expected: cassandra:cassandra:cassandra:cassandra::
  Got:      cassandra:cassandra:cassandra:cassandra:
55 /home/prow/go/src/k8s.io/kubernetes/hack/lib/test.sh
discovery.sh:92: Successful get all -l'app=cassandra' {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}: cassandra:cassandra:cassandra:cassandra:
(Bpod "cassandra-2wzdn" deleted
I0325 17:56:48.708225   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585159007-15984", Name:"cassandra", UID:"44e15b7f-7d59-4ab6-87c0-18ce6704b630", APIVersion:"v1", ResourceVersion:"3189", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-9k6v8
I0325 17:56:48.710578   55880 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"namespace-1585159007-15984", Name:"cassandra", UID:"3888a911-0766-49d2-b162-817188a50316", APIVersion:"v1", ResourceVersion:"3192", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint namespace-1585159007-15984/cassandra: Operation cannot be fulfilled on endpoints "cassandra": the object has been modified; please apply your changes to the latest version and try again
I0325 17:56:48.720499   55880 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585159007-15984", Name:"cassandra", UID:"44e15b7f-7d59-4ab6-87c0-18ce6704b630", APIVersion:"v1", ResourceVersion:"3189", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-x4jg8
pod "cassandra-lqgts" deleted
replicationcontroller "cassandra" deleted
E0325 17:56:48.731896   55880 replica_set.go:535] sync "namespace-1585159007-15984/cassandra" failed with Operation cannot be fulfilled on replicationcontrollers "cassandra": StorageError: invalid object, Code: 4, Key: /registry/controllers/namespace-1585159007-15984/cassandra, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 44e15b7f-7d59-4ab6-87c0-18ce6704b630, UID in object meta: 
service "cassandra" deleted
+++ exit code: 0
Recording: run_kubectl_explain_tests
Running command: run_kubectl_explain_tests

+++ Running case: test-cmd.run_kubectl_explain_tests 
... skipping 352 lines ...
namespace-1585158998-29483   default   0         17s
namespace-1585159007-15984   default   0         8s
some-other-random            default   0         9s
has:all-ns-test-2
namespace "all-ns-test-1" deleted
namespace "all-ns-test-2" deleted
E0325 17:57:00.856487   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0325 17:57:05.440405   55880 namespace_controller.go:185] Namespace has been deleted all-ns-test-1
get.sh:376: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
get.sh:380: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
get.sh:384: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1:
... skipping 223 lines ...
Successful
message:valid-pod:
has:valid-pod:
Successful
message:kubernetes:
has:kubernetes:
E0325 17:57:11.470620   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:valid-pod:
has:valid-pod:
Successful
message:foo:
has:foo:
... skipping 414 lines ...
message:node/127.0.0.1 already uncordoned (server dry run)
has:already uncordoned
node-management.sh:142: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node/127.0.0.1 labeled
node-management.sh:147: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
Successful
message:error: cannot specify both a node name and a --selector option
See 'kubectl drain -h' for help and examples
has:cannot specify both a node name
Successful
message:error: USAGE: cordon NODE [flags]
See 'kubectl cordon -h' for help and examples
has:error\: USAGE\: cordon NODE
node/127.0.0.1 already uncordoned
Successful
message:error: You must provide one or more resources by argument or filename.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
   '<resource> <name>'
   '<resource>'
has:must provide one or more resources
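
The cordon/drain failures above, reconstructed; the node name comes from the run, the --selector value is illustrative:
  kubectl cordon 127.0.0.1                        # mark the node unschedulable
  kubectl uncordon 127.0.0.1 --dry-run=server     # server-side dry run, as in the log
  kubectl drain 127.0.0.1 --selector=test=label   # rejected: node name and --selector are mutually exclusive
  kubectl cordon                                  # rejected: cordon requires a NODE argument
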
... skipping 14 lines ...
+++ [0325 17:57:25] Testing kubectl plugins
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/version/kubectl-version
  - warning: kubectl-version overwrites existing command: "kubectl version"
error: one plugin warning was found
has:kubectl-version overwrites existing command: "kubectl version"
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
  - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
error: one plugin warning was found
has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
has:plugins are available
Successful
message:Unable read directory "test/fixtures/pkg/kubectl/plugins/empty" from your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory. Skipping...
error: unable to find any kubectl plugins in your PATH
has:unable to find any kubectl plugins in your PATH
Successful
message:I am plugin foo
has:plugin foo
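
A kubectl plugin is just an executable named kubectl-<name> somewhere on PATH, which is what the fixtures above exercise; a minimal sketch, with the install path illustrative:
  # Create and install a trivial plugin
  printf '#!/bin/sh\necho "I am plugin foo"\n' > /usr/local/bin/kubectl-foo
  chmod +x /usr/local/bin/kubectl-foo
  kubectl foo          # dispatches to the plugin
  kubectl plugin list  # discovers plugins and warns about overwrites and overshadowing
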
E0325 17:57:26.061044   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:I am plugin bar called with args test/fixtures/pkg/kubectl/plugins/bar/kubectl-bar arg1
has:test/fixtures/pkg/kubectl/plugins/bar/kubectl-bar arg1
Successful
message:Client Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.0-alpha.0.1088+d00f9c7c1091e3", GitCommit:"d00f9c7c1091e31c75c6636500095c4e490b8db8", GitTreeState:"clean", BuildDate:"2020-03-25T15:49:03Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
has:Client Version
... skipping 5 lines ...
Running command: run_impersonation_tests

+++ Running case: test-cmd.run_impersonation_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_impersonation_tests
+++ [0325 17:57:26] Testing impersonation
E0325 17:57:26.431921   55880 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: requesting groups or user-extra for  without impersonating a user
has:without impersonating a user
certificatesigningrequest.certificates.k8s.io/foo created
authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
certificatesigningrequest.certificates.k8s.io "foo" deleted
certificatesigningrequest.certificates.k8s.io/foo created
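
The impersonation checks above: --as sets the impersonated user, --as-group is only valid together with --as, and a CSR records the requesting (impersonated) user in spec.username. A sketch; the csr.yaml manifest is illustrative:
  # Rejected: groups cannot be impersonated without a user
  kubectl get pods --as-group=system:masters
  # Create a CSR while impersonating user1; the API records spec.username=user1
  kubectl create -f csr.yaml --as=user1
  kubectl get csr/foo -o go-template='{{.spec.username}}'
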
... skipping 57 lines ...
I0325 17:57:30.985342   52411 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0325 17:57:30.985581   52411 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0325 17:57:30.985666   52411 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0325 17:57:30.985968   52411 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0325 17:57:30.986014   52411 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0325 17:57:30.986050   52411 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
E0325 17:57:30.986063   52411 controller.go:184] rpc error: code = Unavailable desc = transport is closing
I0325 17:57:30.986104   52411 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
junit report dir: /logs/artifacts
+++ [0325 17:57:31] Clean up complete
+ make test-integration
warning: ignoring symlink /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes
go: warning: "k8s.io/kubernetes/vendor/github.com/go-bindata/go-bindata/..." matched no packages
... skipping 339 lines ...
    synthetic_master_test.go:722: UPDATE_NODE_APISERVER is not set

=== SKIP: test/integration/scheduler_perf TestSchedule100Node3KPods (0.00s)
    scheduler_test.go:73: Skipping because we want to run short tests


=== Failed
=== FAIL: test/integration/scheduler TestPreScorePlugin (4.33s)
W0325 18:06:13.922910  113876 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0325 18:06:13.922938  113876 services.go:51] Setting service IP to "10.0.0.1" (read-write).
I0325 18:06:13.922951  113876 master.go:314] Node port range unspecified. Defaulting to 30000-32767.
I0325 18:06:13.922967  113876 master.go:270] Using reconciler: 
I0325 18:06:13.923107  113876 config.go:627] Not requested to run hook priority-and-fairness-config-consumer
I0325 18:06:13.924870  113876 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
... skipping 490 lines ...
W0325 18:06:14.188543  113876 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0325 18:06:14.189352  113876 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.191164  113876 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.192020  113876 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.192549  113876 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.194410  113876 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2bec3490-a0f6-4fd7-adfb-80fcd75edd6e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0325 18:06:14.198720  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.198765  113876 healthz.go:186] healthz check poststarthook/bootstrap-controller failed: not finished
I0325 18:06:14.198777  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.198790  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.198801  113876 healthz.go:186] healthz check poststarthook/start-cluster-authentication-info-controller failed: not finished
I0325 18:06:14.198818  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/start-cluster-authentication-info-controller failed: reason withheld
healthz check failed
W0325 18:06:14.198739  113876 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0325 18:06:14.198903  113876 httplog.go:90] verb="GET" URI="/healthz" latency=356.673µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33766": 
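(The [+]/[-] listing above is the apiserver's verbose healthz body: each named check reports ok or failed, with failure reasons withheld from the aggregate response. Individual checks are addressable too, e.g. /healthz/etcd. A quick probe, as a sketch — the address is hypothetical, since the integration apiserver binds an ephemeral port:

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        base := "http://127.0.0.1:8080" // hypothetical address
        for _, path := range []string{"/healthz?verbose", "/healthz/etcd"} {
            resp, err := http.Get(base + path)
            if err != nil {
                fmt.Println(path, "error:", err)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            // /healthz?verbose returns the [+]/[-] listing seen above;
            // /healthz/<check> (e.g. /healthz/etcd) returns one check.
            fmt.Printf("%s -> %d\n%s\n", path, resp.StatusCode, body)
        }
    }
)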
I0325 18:06:14.199046  113876 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0325 18:06:14.199261  113876 shared_informer.go:225] Waiting for caches to sync for cluster_authentication_trust_controller
I0325 18:06:14.199512  113876 reflector.go:175] Starting reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0325 18:06:14.199560  113876 reflector.go:211] Listing and watching *v1.ConfigMap from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0325 18:06:14.200258  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0" latency=351.271µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.200450  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency=1.891462ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33768": 
I0325 18:06:14.202952  113876 get.go:251] Starting watch for /api/v1/namespaces/kube-system/configmaps, rv=34749 labels= fields= timeout=8m34s
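(The configmaps GET with "?limit=500&resourceVersion=0" followed by "Starting watch ... rv=34749" is the reflector's list-then-watch handshake: list from the watch cache, then open a watch from the resource version the list returned. In client-go terms, roughly — a sketch assuming a recent client-go where List/Watch take a context; the kubeconfig path is hypothetical, the test wires a rest.Config directly:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()

        // List from the watch cache (ResourceVersion "0"), paged at 500,
        // matching the "?limit=500&resourceVersion=0" request in the log.
        cms, err := clientset.CoreV1().ConfigMaps("kube-system").List(ctx,
            metav1.ListOptions{ResourceVersion: "0", Limit: 500})
        if err != nil {
            panic(err)
        }

        // Watch from the version the list returned (rv=34749 in the log).
        w, err := clientset.CoreV1().ConfigMaps("kube-system").Watch(ctx,
            metav1.ListOptions{ResourceVersion: cms.ResourceVersion})
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Println("event:", ev.Type) // a reflector applies these to its store
        }
    }
)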
I0325 18:06:14.203706  113876 httplog.go:90] verb="GET" URI="/api/v1/services" latency=1.184478ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.208528  113876 httplog.go:90] verb="GET" URI="/api/v1/services" latency=1.251215ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.211025  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.211091  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.211105  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.211114  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:14.211170  113876 httplog.go:90] verb="GET" URI="/healthz" latency=253.424µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.213123  113876 httplog.go:90] verb="GET" URI="/api/v1/services" latency=1.386615ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:14.213245  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.778652ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.214440  113876 httplog.go:90] verb="GET" URI="/api/v1/services" latency=1.464237ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33772": 
I0325 18:06:14.216031  113876 httplog.go:90] verb="GET" URI="/api/v1/services" latency=847.799µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33772": 
I0325 18:06:14.217244  113876 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=2.40421ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
... skipping 4 lines ...
I0325 18:06:14.225299  113876 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=1.660546ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.226951  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.229853ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.228604  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=1.226371ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.230096  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-node-lease" latency=1.092831ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.299533  113876 shared_informer.go:255] caches populated
I0325 18:06:14.299595  113876 shared_informer.go:232] Caches are synced for cluster_authentication_trust_controller 
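(The "Waiting for caches to sync" / "Caches are synced" pair is the standard informer gate: the controller blocks until its shared informers finish their initial list before doing any work. A minimal sketch using client-go; the package name is hypothetical:

    package controller // hypothetical

    import (
        "fmt"

        "k8s.io/client-go/tools/cache"
    )

    // waitForSync blocks until the given informers have completed their
    // initial list, producing log pairs like the two lines above.
    func waitForSync(stopCh <-chan struct{}, synced ...cache.InformerSynced) error {
        if !cache.WaitForCacheSync(stopCh, synced...) {
            return fmt.Errorf("timed out waiting for caches to sync")
        }
        return nil
    }
)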
I0325 18:06:14.300071  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.300107  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.300118  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.300129  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:14.300218  113876 httplog.go:90] verb="GET" URI="/healthz" latency=287.359µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33766": 
... skipping 156 lines ...
I0325 18:06:14.912033  113876 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0325 18:06:14.912071  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:14.912093  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:14.912113  113876 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:14.912192  113876 httplog.go:90] verb="GET" URI="/healthz" latency=291µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:14.923017  113876 client.go:361] parsed scheme: "endpoint"
I0325 18:06:14.923111  113876 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0325 18:06:15.001371  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.001425  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:15.001436  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.001534  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.542233ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33766": 
... skipping 36 lines ...
I0325 18:06:15.200353  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.464235ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:15.201384  113876 httplog.go:90] verb="GET" URI="/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical" latency=1.618765ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.202351  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.202376  113876 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0325 18:06:15.202385  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.202445  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.244844ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33766": 
I0325 18:06:15.202683  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.51621ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33936": 
I0325 18:06:15.204567  113876 httplog.go:90] verb="POST" URI="/apis/scheduling.k8s.io/v1/priorityclasses" latency=2.523773ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.205105  113876 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0325 18:06:15.205249  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin" latency=1.823547ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:15.207159  113876 httplog.go:90] verb="GET" URI="/apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical" latency=1.407419ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.207511  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/admin" latency=1.384805ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:15.209605  113876 httplog.go:90] verb="POST" URI="/apis/scheduling.k8s.io/v1/priorityclasses" latency=1.948076ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.209840  113876 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0325 18:06:15.209867  113876 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
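(The two objects created above are the built-in system priority classes, which let critical pods preempt others. Expressed as API objects they are roughly the sketch below — values taken from the log lines above, descriptions paraphrased from upstream, package name hypothetical:

    package bootstrap // hypothetical

    import (
        schedulingv1 "k8s.io/api/scheduling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // The two built-in classes the bootstrap hook ensures exist.
    var systemPriorityClasses = []schedulingv1.PriorityClass{
        {
            ObjectMeta:  metav1.ObjectMeta{Name: "system-node-critical"},
            Value:       2000001000,
            Description: "Used for system critical pods that must not be moved from their current node.",
        },
        {
            ObjectMeta:  metav1.ObjectMeta{Name: "system-cluster-critical"},
            Value:       2000000000,
            Description: "Used for system critical pods that must run in the cluster, but can be moved to another node if necessary.",
        },
    }
)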
I0325 18:06:15.212047  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit" latency=4.032644ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:15.212595  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.212624  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.212678  113876 httplog.go:90] verb="GET" URI="/healthz" latency=936.41µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.213529  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/edit" latency=932.425µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33766": 
I0325 18:06:15.215144  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view" latency=1.083216ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.216692  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/view" latency=996.313µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.218035  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery" latency=885.384µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.219728  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin" latency=936.006µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
... skipping 21 lines ...
I0325 18:06:15.289085  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=11.90855ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.289343  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0325 18:06:15.291129  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit" latency=1.51264ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.297933  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=6.124776ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.298430  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0325 18:06:15.299877  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view" latency=1.160579ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.301763  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.301877  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.301987  113876 httplog.go:90] verb="GET" URI="/healthz" latency=2.079057ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:15.304785  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=4.272378ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.305104  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
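(The aggregate-to-admin/edit/view roles just created are aggregation targets: any ClusterRole labeled rbac.authorization.k8s.io/aggregate-to-<role>=true has its rules folded into the built-in admin/edit/view roles by a controller. For example — a sketch where the role name and rule are hypothetical:

    package rbacdemo // hypothetical

    import (
        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // A ClusterRole carrying the aggregation label; the controller that
    // maintains the built-in "admin" role folds these rules into it.
    var customAdminExtras = rbacv1.ClusterRole{
        ObjectMeta: metav1.ObjectMeta{
            Name: "custom-admin-extras", // hypothetical
            Labels: map[string]string{
                "rbac.authorization.k8s.io/aggregate-to-admin": "true",
            },
        },
        Rules: []rbacv1.PolicyRule{{
            APIGroups: []string{""}, // core API group
            Resources: []string{"configmaps"},
            Verbs:     []string{"get", "list", "watch"},
        }},
    }
)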
I0325 18:06:15.306497  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster" latency=1.078261ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.309910  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.861179ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.310397  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0325 18:06:15.311919  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node" latency=1.302586ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.313145  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.313175  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.313229  113876 httplog.go:90] verb="GET" URI="/healthz" latency=920.38µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.315058  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.263053ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.315408  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node
I0325 18:06:15.318247  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector" latency=2.374986ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.320864  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.068644ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.321222  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
... skipping 30 lines ...
I0325 18:06:15.390283  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:legacy-unknown-approver" latency=1.11534ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.394493  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=3.487511ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.395066  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:legacy-unknown-approver
I0325 18:06:15.396802  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kubelet-serving-approver" latency=1.244663ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.399668  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.929799ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.400008  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kubelet-serving-approver
I0325 18:06:15.401863  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.401925  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.402125  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.952585ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:15.402935  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kube-apiserver-client-approver" latency=1.080938ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.405444  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.998535ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.405691  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-approver
I0325 18:06:15.407298  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver" latency=1.327853ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.410309  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.309329ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.410619  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver
I0325 18:06:15.411959  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier" latency=1.04871ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.412735  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.412767  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.412844  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.029288ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.414894  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.282948ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.415249  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0325 18:06:15.416461  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler" latency=972.715µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.419847  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.633148ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.420192  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
... skipping 18 lines ...
I0325 18:06:15.459145  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller" latency=2.780286ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.481012  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=21.008414ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.481475  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0325 18:06:15.486854  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpointslice-controller" latency=1.871974ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.490577  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=3.14492ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.491154  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0325 18:06:15.503174  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.503214  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.503295  113876 httplog.go:90] verb="GET" URI="/healthz" latency=3.24515ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:15.504537  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller" latency=11.946597ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.508149  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.956455ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.508462  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0325 18:06:15.509875  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector" latency=1.122594ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.513029  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.600179ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.513277  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0325 18:06:15.514638  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler" latency=1.058614ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.518384  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.386559ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.519043  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0325 18:06:15.519983  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.520042  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.520110  113876 httplog.go:90] verb="GET" URI="/healthz" latency=8.40655ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.521256  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller" latency=1.681137ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.523895  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.045744ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.524235  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0325 18:06:15.543494  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller" latency=18.892916ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.552010  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=7.843376ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
... skipping 20 lines ...
I0325 18:06:15.595208  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.168242ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.595510  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0325 18:06:15.596899  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller" latency=1.083215ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.599030  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.644237ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.599321  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0325 18:06:15.600754  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller" latency=1.13402ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.600977  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.601004  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.601059  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.249791ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:15.603220  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.762156ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.603525  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0325 18:06:15.605031  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller" latency=1.277641ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.607935  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.413755ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.608295  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0325 18:06:15.611196  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller" latency=2.624606ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.612580  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.612610  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.612665  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.005448ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.614130  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.314162ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.614424  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0325 18:06:15.615891  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller" latency=1.18408ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.620233  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=3.868349ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.620521  113876 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
... skipping 6 lines ...
I0325 18:06:15.640222  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin" latency=1.412523ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.661854  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.755347ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.662138  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0325 18:06:15.680413  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery" latency=1.45969ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.701149  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.190598ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.701455  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0325 18:06:15.701503  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.701530  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.701610  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.752155ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:15.713030  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.713068  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.713155  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.255793ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.720390  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user" latency=1.452376ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.741575  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.513826ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.741980  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0325 18:06:15.760663  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer" latency=1.683695ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.781414  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.391833ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.781958  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0325 18:06:15.800364  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier" latency=1.400704ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:15.801002  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.801184  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.801268  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.395311ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:15.813291  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.813323  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.813398  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.55053ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.821495  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.639775ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.821832  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0325 18:06:15.840827  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager" latency=1.739844ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.862101  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.185912ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.862448  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0325 18:06:15.880541  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns" latency=1.405658ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.901637  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.901693  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.901775  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.916975ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:15.901882  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.925851ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.902267  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0325 18:06:15.913147  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:15.913190  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:15.913278  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.388353ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.920105  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler" latency=1.267444ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.941333  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.357447ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.941637  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0325 18:06:15.960567  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler" latency=1.648367ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.981289  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.362237ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:15.981689  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0325 18:06:16.000435  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node" latency=1.507249ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.001143  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.001175  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.001244  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.143277ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:16.013027  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.013061  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.013218  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.365082ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.021066  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.130512ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.021330  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0325 18:06:16.040426  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller" latency=1.425782ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.061542  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.59992ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.061947  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0325 18:06:16.080771  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller" latency=1.862182ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.101453  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.101490  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.101562  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.688211ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:16.102728  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.707782ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.103172  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0325 18:06:16.113042  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.113089  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.113164  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.331164ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.120308  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller" latency=1.402094ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.141119  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.174789ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.141400  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0325 18:06:16.160578  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller" latency=1.603594ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.183140  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.061485ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.183464  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0325 18:06:16.200589  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller" latency=1.615363ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.201836  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.201876  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.201965  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.498518ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:16.213163  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.213197  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.213285  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.453425ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.221775  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.812814ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.222471  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0325 18:06:16.241762  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller" latency=2.738434ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.269683  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=8.6318ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.270099  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0325 18:06:16.280531  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller" latency=1.402663ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.303498  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.303531  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.303616  113876 httplog.go:90] verb="GET" URI="/healthz" latency=3.664752ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:16.305359  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.42273ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.305736  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0325 18:06:16.313084  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.313123  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.313218  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.425231ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.320824  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpointslice-controller" latency=1.592968ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.342733  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.648851ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.343181  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0325 18:06:16.361743  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller" latency=1.169866ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.381588  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.591798ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.381936  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0325 18:06:16.400426  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector" latency=1.510886ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.404531  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.404586  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.404690  113876 httplog.go:90] verb="GET" URI="/healthz" latency=4.830031ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:16.412744  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.412782  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.412871  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.058503ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.421400  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.500271ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.421951  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0325 18:06:16.440576  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler" latency=1.285472ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.461738  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.765701ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.462179  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0325 18:06:16.480695  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller" latency=1.653391ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.509378  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=6.979395ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.509624  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.509657  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.509741  113876 httplog.go:90] verb="GET" URI="/healthz" latency=7.365181ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:16.509917  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0325 18:06:16.513376  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.513436  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.513552  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.609677ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.522401  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller" latency=2.896672ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.541152  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.166293ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.541453  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0325 18:06:16.560118  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller" latency=1.215781ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.581620  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.69229ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.581972  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0325 18:06:16.603814  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.603853  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.603928  113876 httplog.go:90] verb="GET" URI="/healthz" latency=2.301152ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:16.604997  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder" latency=1.209936ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.612846  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.612879  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.612966  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.156396ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.621309  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.379862ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.621641  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0325 18:06:16.642357  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector" latency=1.310899ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.662430  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.689742ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.662923  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0325 18:06:16.681525  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller" latency=2.56403ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.701890  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.701942  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.701966  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.030866ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.702015  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.944638ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:16.702252  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0325 18:06:16.715502  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.715537  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.715622  113876 httplog.go:90] verb="GET" URI="/healthz" latency=3.724037ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.720043  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller" latency=1.179405ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.744358  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=5.218372ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.744699  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0325 18:06:16.760383  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller" latency=1.459174ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.781772  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.799011ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.782320  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0325 18:06:16.800433  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller" latency=1.508097ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:16.801740  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.801766  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.801836  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.882595ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:16.812912  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.812971  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.813054  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.267576ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.822418  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.455737ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.822720  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0325 18:06:16.840327  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller" latency=1.388308ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.862072  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.337656ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.862634  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0325 18:06:16.880468  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller" latency=1.505442ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.901851  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.901880  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.901956  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.654169ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:16.902300  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.293697ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.902722  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0325 18:06:16.913070  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:16.913110  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:16.913187  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.337202ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.926050  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller" latency=7.161835ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.942367  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.390931ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.942831  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0325 18:06:16.960359  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller" latency=1.415105ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.981659  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.741132ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:16.981967  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0325 18:06:17.000315  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller" latency=1.414011ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.000958  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:17.000990  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:17.001056  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.122643ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:17.014562  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:17.014601  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:17.014691  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.318025ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.020960  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.030649ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.021411  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0325 18:06:17.040844  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller" latency=1.922883ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.061183  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.192293ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.061755  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0325 18:06:17.080386  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller" latency=1.460502ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.101244  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.309751ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.101926  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:17.101955  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:17.102022  113876 httplog.go:90] verb="GET" URI="/healthz" latency=2.121983ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:17.102045  113876 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0325 18:06:17.113174  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:17.113211  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:17.113300  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.391882ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.120361  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader" latency=1.392532ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.122606  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.664578ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.142191  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=3.173982ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.142531  113876 storage_rbac.go:279] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
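
With the clusterrolebindings done, the bootstrapper moves on to namespaced objects: each role (and, further down, each rolebinding) is preceded by a GET on /api/v1/namespaces/kube-system or /api/v1/namespaces/kube-public, presumably confirming the target namespace before the POST, since creating into a missing namespace would fail. A hedged sketch of the namespaced variant, under the same client-go assumptions as the sketch above:

    package rbacbootstrap

    import (
    	"context"

    	rbacv1 "k8s.io/api/rbac/v1"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // ensureRole checks the namespace, then creates the role only on a 404,
    // matching the GET namespace / GET role(404) / POST role(201) triplets above.
    func ensureRole(ctx context.Context, cs kubernetes.Interface, ns string, role *rbacv1.Role) error {
    	if _, err := cs.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{}); err != nil {
    		return err // namespace must be reachable first
    	}
    	_, err := cs.RbacV1().Roles(ns).Get(ctx, role.Name, metav1.GetOptions{})
    	if !apierrors.IsNotFound(err) {
    		return err // nil when the role already exists, otherwise a real error
    	}
    	_, err = cs.RbacV1().Roles(ns).Create(ctx, role, metav1.CreateOptions{})
    	return err
    }
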
I0325 18:06:17.160824  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer" latency=1.764966ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.163172  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.608603ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.181714  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=2.759325ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.182120  113876 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0325 18:06:17.200650  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider" latency=1.654447ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.201066  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:17.201124  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:17.201191  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.378335ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:17.203149  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.482911ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.213177  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:17.213231  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:17.213316  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.330321ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.221306  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=2.384754ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.221912  113876 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0325 18:06:17.242410  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner" latency=3.372971ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.245646  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=2.419046ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.261659  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=2.679611ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.261979  113876 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0325 18:06:17.280402  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager" latency=1.433076ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.282691  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.848759ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.301784  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=2.847583ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.302830  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:17.302865  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:17.302919  113876 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0325 18:06:17.302938  113876 httplog.go:90] verb="GET" URI="/healthz" latency=3.011674ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:17.314655  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:17.314700  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:17.314841  113876 httplog.go:90] verb="GET" URI="/healthz" latency=2.853428ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.320271  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler" latency=1.368417ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.322935  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.911103ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.342440  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=2.425827ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.343230  113876 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0325 18:06:17.360456  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer" latency=1.50494ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.364771  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=3.731456ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.381342  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles" latency=2.299773ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.381628  113876 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0325 18:06:17.400520  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader" latency=1.441672ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.402092  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:17.402135  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:17.402225  113876 httplog.go:90] verb="GET" URI="/healthz" latency=2.250815ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:17.403147  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.665432ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.413127  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:17.413166  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:17.413264  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.385675ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.422346  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=3.256544ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.422752  113876 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0325 18:06:17.443928  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager" latency=4.998594ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.446394  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.606921ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.461543  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=2.574234ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.461900  113876 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0325 18:06:17.480391  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler" latency=1.435686ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.482629  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.649343ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.501826  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:17.502007  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:17.502927  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=2.612405ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33770": 
I0325 18:06:17.503067  113876 httplog.go:90] verb="GET" URI="/healthz" latency=2.786138ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33938": 
I0325 18:06:17.503697  113876 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0325 18:06:17.512944  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:17.512991  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:17.513069  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.267074ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.520521  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer" latency=1.490413ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.523454  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.933705ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.542841  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=3.869114ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.543418  113876 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0325 18:06:17.560807  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider" latency=1.815373ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.564265  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.723239ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.585515  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=6.533113ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.586092  113876 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0325 18:06:17.600912  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner" latency=1.778976ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.602252  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:17.602296  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:17.602364  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.607245ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:33770": 
I0325 18:06:17.604026  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.686634ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.613134  113876 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0325 18:06:17.613183  113876 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0325 18:06:17.613266  113876 httplog.go:90] verb="GET" URI="/healthz" latency=1.441167ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.621434  113876 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=2.384309ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.621780  113876 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0325 18:06:17.640578  113876 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer" latency=1.525027ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33938": 
I0325 18:06:17.643171  113876 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=1.988098ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format"