PR shawnhanx: Migrate kubelet to use v1 Event API
Result FAILURE
Tests 1 failed / 3313 succeeded
Started 2021-04-08 03:40
Elapsed 37m28s
Revision af07e4f36759678abb425607ba9ab4e35bde9032
Refs 100600

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestNodeAffinity 4.83s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodeAffinity$
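To reproduce locally, note that the scheduler integration tests expect an etcd binary on PATH (the log below shows the test apiserver dialing http://127.0.0.1:2379). A minimal sketch, assuming a kubernetes checkout and the repo's hack/install-etcd.sh helper; the install path is an assumption, not taken from this job:

# from the root of a kubernetes checkout (assumed setup)
./hack/install-etcd.sh
export PATH="$(pwd)/third_party/etcd:${PATH}"
go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodeAffinity$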
=== RUN   TestNodeAffinity
I0408 04:13:32.247224  131322 apf_controller.go:195] NewTestableController "Controller" with serverConcurrencyLimit=600, requestWaitLimit=15s, name=Controller, asFieldManager="api-priority-and-fairness-config-consumer-v1"
I0408 04:13:32.247324  131322 apf_controller.go:731] No exempt PriorityLevelConfiguration found, imagining one
I0408 04:13:32.247337  131322 apf_controller.go:731] No catch-all PriorityLevelConfiguration found, imagining one
W0408 04:13:32.247370  131322 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0408 04:13:32.247391  131322 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0408 04:13:32.247472  131322 services.go:51] Setting service IP to "10.0.0.1" (read-write).
I0408 04:13:32.247492  131322 instance.go:327] Node port range unspecified. Defaulting to 30000-32767.
I0408 04:13:32.247501  131322 instance.go:283] Using reconciler: 
I0408 04:13:32.249087  131322 instance.go:387] Could not construct pre-rendered responses for ServiceAccountIssuerDiscovery endpoints. Endpoints will not be enabled. Error: empty issuer URL
I0408 04:13:32.249287  131322 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.249506  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.249601  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.250428  131322 store.go:1428] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0408 04:13:32.250487  131322 reflector.go:255] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0408 04:13:32.250486  131322 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.250771  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.250801  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.251374  131322 cacher.go:405] cacher (*core.PodTemplate): initialized
I0408 04:13:32.251399  131322 watch_cache.go:550] Replace watchCache (rev: 83726) 
I0408 04:13:32.251381  131322 store.go:1428] Monitoring events count at <storage-prefix>//events
I0408 04:13:32.251412  131322 reflector.go:255] Listing and watching *core.Event from storage/cacher.go:/events
I0408 04:13:32.251493  131322 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.251631  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.251671  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.252344  131322 store.go:1428] Monitoring limitranges count at <storage-prefix>//limitranges
I0408 04:13:32.252480  131322 reflector.go:255] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0408 04:13:32.252548  131322 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.252699  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.252733  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.253304  131322 cacher.go:405] cacher (*core.LimitRange): initialized
I0408 04:13:32.253332  131322 watch_cache.go:550] Replace watchCache (rev: 83726) 
I0408 04:13:32.253610  131322 store.go:1428] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0408 04:13:32.253673  131322 reflector.go:255] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0408 04:13:32.253973  131322 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.254190  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.254221  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.254630  131322 cacher.go:405] cacher (*core.Event): initialized
I0408 04:13:32.254642  131322 cacher.go:405] cacher (*core.ResourceQuota): initialized
I0408 04:13:32.254644  131322 watch_cache.go:550] Replace watchCache (rev: 83726) 
I0408 04:13:32.254654  131322 watch_cache.go:550] Replace watchCache (rev: 83726) 
I0408 04:13:32.255335  131322 store.go:1428] Monitoring secrets count at <storage-prefix>//secrets
I0408 04:13:32.255406  131322 reflector.go:255] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0408 04:13:32.255534  131322 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.255688  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.255712  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.256377  131322 store.go:1428] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0408 04:13:32.256470  131322 cacher.go:405] cacher (*core.Secret): initialized
I0408 04:13:32.256475  131322 reflector.go:255] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0408 04:13:32.256486  131322 watch_cache.go:550] Replace watchCache (rev: 83726) 
I0408 04:13:32.256535  131322 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.256649  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.256666  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.257479  131322 store.go:1428] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0408 04:13:32.257565  131322 reflector.go:255] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0408 04:13:32.257687  131322 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.257824  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.257847  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.258295  131322 cacher.go:405] cacher (*core.PersistentVolume): initialized
I0408 04:13:32.258311  131322 watch_cache.go:550] Replace watchCache (rev: 83726) 
I0408 04:13:32.258875  131322 store.go:1428] Monitoring configmaps count at <storage-prefix>//configmaps
I0408 04:13:32.258892  131322 cacher.go:405] cacher (*core.PersistentVolumeClaim): initialized
I0408 04:13:32.258903  131322 watch_cache.go:550] Replace watchCache (rev: 83726) 
I0408 04:13:32.259066  131322 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.259104  131322 reflector.go:255] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0408 04:13:32.259260  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.259281  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.259894  131322 cacher.go:405] cacher (*core.ConfigMap): initialized
I0408 04:13:32.259907  131322 watch_cache.go:550] Replace watchCache (rev: 83726) 
I0408 04:13:32.260051  131322 store.go:1428] Monitoring namespaces count at <storage-prefix>//namespaces
I0408 04:13:32.260111  131322 reflector.go:255] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0408 04:13:32.260235  131322 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.260408  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.260438  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.261386  131322 store.go:1428] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0408 04:13:32.261524  131322 cacher.go:405] cacher (*core.Namespace): initialized
I0408 04:13:32.261606  131322 watch_cache.go:550] Replace watchCache (rev: 83726) 
I0408 04:13:32.261623  131322 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.261729  131322 reflector.go:255] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0408 04:13:32.261782  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.261806  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.262644  131322 store.go:1428] Monitoring nodes count at <storage-prefix>//minions
I0408 04:13:32.262714  131322 reflector.go:255] Listing and watching *core.Node from storage/cacher.go:/minions
I0408 04:13:32.262877  131322 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.263077  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.263118  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.263285  131322 cacher.go:405] cacher (*core.Endpoints): initialized
I0408 04:13:32.263559  131322 watch_cache.go:550] Replace watchCache (rev: 83726) 
I0408 04:13:32.263571  131322 cacher.go:405] cacher (*core.Node): initialized
I0408 04:13:32.263902  131322 watch_cache.go:550] Replace watchCache (rev: 83726) 
I0408 04:13:32.264108  131322 store.go:1428] Monitoring pods count at <storage-prefix>//pods
I0408 04:13:32.264213  131322 reflector.go:255] Listing and watching *core.Pod from storage/cacher.go:/pods
I0408 04:13:32.264348  131322 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.264625  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.264727  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.265217  131322 cacher.go:405] cacher (*core.Pod): initialized
I0408 04:13:32.265239  131322 watch_cache.go:550] Replace watchCache (rev: 83726) 
I0408 04:13:32.265503  131322 store.go:1428] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0408 04:13:32.265537  131322 reflector.go:255] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0408 04:13:32.265555  131322 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.265685  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.265713  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.266436  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.266474  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.266596  131322 cacher.go:405] cacher (*core.ServiceAccount): initialized
I0408 04:13:32.266627  131322 watch_cache.go:550] Replace watchCache (rev: 83726) 
I0408 04:13:32.267273  131322 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.267386  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.267416  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.268083  131322 store.go:1428] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0408 04:13:32.268121  131322 reflector.go:255] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0408 04:13:32.268299  131322 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.268448  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.268474  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.269105  131322 cacher.go:405] cacher (*core.ReplicationController): initialized
I0408 04:13:32.269124  131322 watch_cache.go:550] Replace watchCache (rev: 83726) 
I0408 04:13:32.269143  131322 store.go:1428] Monitoring services count at <storage-prefix>//services/specs
I0408 04:13:32.269167  131322 rest.go:130] the default service ipfamily for this cluster is: IPv4
I0408 04:13:32.269259  131322 reflector.go:255] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0408 04:13:32.269809  131322 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.269975  131322 cacher.go:405] cacher (*core.Service): initialized
I0408 04:13:32.270053  131322 watch_cache.go:550] Replace watchCache (rev: 83726) 
I0408 04:13:32.270095  131322 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.270862  131322 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.271464  131322 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.272243  131322 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.273066  131322 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.273423  131322 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.273531  131322 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.273749  131322 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.274117  131322 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.274602  131322 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.274795  131322 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.275375  131322 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.275652  131322 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.276182  131322 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.276468  131322 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.277195  131322 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.277366  131322 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.277544  131322 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.277709  131322 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.277932  131322 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.278108  131322 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.278319  131322 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.278963  131322 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.279220  131322 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.279920  131322 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.280568  131322 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.280845  131322 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.281147  131322 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.281761  131322 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.282017  131322 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.282626  131322 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.283249  131322 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.283764  131322 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.284510  131322 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.284772  131322 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.284901  131322 instance.go:586] Skipping disabled API group "internal.apiserver.k8s.io".
I0408 04:13:32.284994  131322 instance.go:607] Enabling API group "authentication.k8s.io".
I0408 04:13:32.285096  131322 instance.go:607] Enabling API group "authorization.k8s.io".
I0408 04:13:32.285283  131322 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.285460  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.285502  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.286368  131322 store.go:1428] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0408 04:13:32.286442  131322 reflector.go:255] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0408 04:13:32.286540  131322 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.286699  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.286736  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.287554  131322 cacher.go:405] cacher (*autoscaling.HorizontalPodAutoscaler): initialized
I0408 04:13:32.287576  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.287978  131322 store.go:1428] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0408 04:13:32.288118  131322 reflector.go:255] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0408 04:13:32.288218  131322 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.288370  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.288506  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.289345  131322 cacher.go:405] cacher (*autoscaling.HorizontalPodAutoscaler): initialized
I0408 04:13:32.289445  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.289615  131322 store.go:1428] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0408 04:13:32.289680  131322 reflector.go:255] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0408 04:13:32.289748  131322 instance.go:607] Enabling API group "autoscaling".
I0408 04:13:32.289944  131322 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.290268  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.290414  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.290674  131322 cacher.go:405] cacher (*autoscaling.HorizontalPodAutoscaler): initialized
I0408 04:13:32.290703  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.291675  131322 store.go:1428] Monitoring jobs.batch count at <storage-prefix>//jobs
I0408 04:13:32.291721  131322 reflector.go:255] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0408 04:13:32.291954  131322 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.292086  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.292110  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.292807  131322 cacher.go:405] cacher (*batch.Job): initialized
I0408 04:13:32.292827  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.292846  131322 store.go:1428] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0408 04:13:32.292889  131322 reflector.go:255] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0408 04:13:32.293093  131322 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.293487  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.293526  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.293742  131322 cacher.go:405] cacher (*batch.CronJob): initialized
I0408 04:13:32.293765  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.294332  131322 store.go:1428] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0408 04:13:32.294383  131322 reflector.go:255] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0408 04:13:32.294479  131322 instance.go:607] Enabling API group "batch".
I0408 04:13:32.294775  131322 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.294926  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.295077  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.295386  131322 cacher.go:405] cacher (*batch.CronJob): initialized
I0408 04:13:32.295406  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.295733  131322 store.go:1428] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0408 04:13:32.295895  131322 reflector.go:255] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0408 04:13:32.295951  131322 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.296088  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.296106  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.297276  131322 cacher.go:405] cacher (*certificates.CertificateSigningRequest): initialized
I0408 04:13:32.297290  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.297331  131322 store.go:1428] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0408 04:13:32.297398  131322 reflector.go:255] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0408 04:13:32.297404  131322 instance.go:607] Enabling API group "certificates.k8s.io".
I0408 04:13:32.297706  131322 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.297868  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.297891  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.298227  131322 cacher.go:405] cacher (*certificates.CertificateSigningRequest): initialized
I0408 04:13:32.298248  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.298979  131322 store.go:1428] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0408 04:13:32.299056  131322 reflector.go:255] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0408 04:13:32.299283  131322 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.299422  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.299436  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.299952  131322 cacher.go:405] cacher (*coordination.Lease): initialized
I0408 04:13:32.299968  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.300132  131322 store.go:1428] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0408 04:13:32.300228  131322 instance.go:607] Enabling API group "coordination.k8s.io".
I0408 04:13:32.300277  131322 reflector.go:255] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0408 04:13:32.300498  131322 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.300652  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.300673  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.301235  131322 cacher.go:405] cacher (*coordination.Lease): initialized
I0408 04:13:32.301248  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.301309  131322 store.go:1428] Monitoring endpointslices.discovery.k8s.io count at <storage-prefix>//endpointslices
I0408 04:13:32.301504  131322 reflector.go:255] Listing and watching *discovery.EndpointSlice from storage/cacher.go:/endpointslices
I0408 04:13:32.301570  131322 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.301717  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.301739  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.302319  131322 cacher.go:405] cacher (*discovery.EndpointSlice): initialized
I0408 04:13:32.302331  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.302337  131322 store.go:1428] Monitoring endpointslices.discovery.k8s.io count at <storage-prefix>//endpointslices
I0408 04:13:32.302386  131322 reflector.go:255] Listing and watching *discovery.EndpointSlice from storage/cacher.go:/endpointslices
I0408 04:13:32.302397  131322 instance.go:607] Enabling API group "discovery.k8s.io".
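The recurring client.go:360 "parsed scheme: \"endpoint\"" and endpoint.go:68 ccResolverWrapper pairs appear to be gRPC resolver output from the etcd client that each store dials; they are informational, not errors. Below is a hedged sketch of dialing the same endpoint directly with the etcd v3 client (the module path shown is the v3.5 one; older trees import go.etcd.io/etcd/clientv3), which would typically emit similar resolver lines at the default gRPC log verbosity.

package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Dial the same local etcd endpoint the test apiserver uses.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Count keys under "/" as a trivial smoke test of the connection.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	resp, err := cli.Get(ctx, "/", clientv3.WithPrefix(), clientv3.WithCountOnly())
	if err != nil {
		panic(err)
	}
	fmt.Println("keys under /:", resp.Count)
}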
I0408 04:13:32.302651  131322 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.302800  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.302819  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.303304  131322 cacher.go:405] cacher (*discovery.EndpointSlice): initialized
I0408 04:13:32.303318  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.303507  131322 store.go:1428] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0408 04:13:32.303563  131322 reflector.go:255] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0408 04:13:32.303565  131322 instance.go:607] Enabling API group "extensions".
I0408 04:13:32.303770  131322 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.303889  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.303907  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.304724  131322 cacher.go:405] cacher (*networking.Ingress): initialized
I0408 04:13:32.304752  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.304898  131322 store.go:1428] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0408 04:13:32.305070  131322 reflector.go:255] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0408 04:13:32.305112  131322 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.305392  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.305509  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.305959  131322 cacher.go:405] cacher (*networking.NetworkPolicy): initialized
I0408 04:13:32.306129  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.306224  131322 store.go:1428] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0408 04:13:32.306426  131322 reflector.go:255] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0408 04:13:32.306415  131322 storage_factory.go:285] storing ingressclasses.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.306627  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.306654  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.307116  131322 cacher.go:405] cacher (*networking.Ingress): initialized
I0408 04:13:32.307135  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.307736  131322 store.go:1428] Monitoring ingressclasses.networking.k8s.io count at <storage-prefix>//ingressclasses
I0408 04:13:32.307788  131322 reflector.go:255] Listing and watching *networking.IngressClass from storage/cacher.go:/ingressclasses
I0408 04:13:32.308027  131322 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.308200  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.308221  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.308595  131322 cacher.go:405] cacher (*networking.IngressClass): initialized
I0408 04:13:32.308622  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.308977  131322 store.go:1428] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0408 04:13:32.309194  131322 storage_factory.go:285] storing ingressclasses.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.309447  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.309486  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.309581  131322 reflector.go:255] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0408 04:13:32.310288  131322 store.go:1428] Monitoring ingressclasses.networking.k8s.io count at <storage-prefix>//ingressclasses
I0408 04:13:32.310331  131322 cacher.go:405] cacher (*networking.Ingress): initialized
I0408 04:13:32.310341  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.310384  131322 instance.go:607] Enabling API group "networking.k8s.io".
I0408 04:13:32.310586  131322 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.310823  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.310958  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.311307  131322 reflector.go:255] Listing and watching *networking.IngressClass from storage/cacher.go:/ingressclasses
I0408 04:13:32.311713  131322 store.go:1428] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0408 04:13:32.311955  131322 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.312106  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.312277  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.312432  131322 cacher.go:405] cacher (*networking.IngressClass): initialized
I0408 04:13:32.312455  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.312640  131322 reflector.go:255] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0408 04:13:32.313097  131322 store.go:1428] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0408 04:13:32.313137  131322 reflector.go:255] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0408 04:13:32.313161  131322 instance.go:607] Enabling API group "node.k8s.io".
I0408 04:13:32.313350  131322 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.313488  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.313515  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.313966  131322 cacher.go:405] cacher (*node.RuntimeClass): initialized
I0408 04:13:32.313993  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.314133  131322 store.go:1428] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0408 04:13:32.314192  131322 reflector.go:255] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0408 04:13:32.314216  131322 cacher.go:405] cacher (*node.RuntimeClass): initialized
I0408 04:13:32.314244  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.314545  131322 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.314663  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.314681  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.315193  131322 cacher.go:405] cacher (*policy.PodDisruptionBudget): initialized
I0408 04:13:32.315217  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.315359  131322 store.go:1428] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0408 04:13:32.315399  131322 reflector.go:255] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0408 04:13:32.315551  131322 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.315717  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.315738  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.316452  131322 cacher.go:405] cacher (*policy.PodSecurityPolicy): initialized
I0408 04:13:32.316479  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.316833  131322 store.go:1428] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0408 04:13:32.316900  131322 reflector.go:255] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0408 04:13:32.316922  131322 instance.go:607] Enabling API group "policy".
I0408 04:13:32.316975  131322 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.317099  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.317117  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.317690  131322 cacher.go:405] cacher (*policy.PodDisruptionBudget): initialized
I0408 04:13:32.317717  131322 watch_cache.go:550] Replace watchCache (rev: 83727) 
I0408 04:13:32.318485  131322 store.go:1428] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0408 04:13:32.318581  131322 reflector.go:255] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0408 04:13:32.318688  131322 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.318799  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.318816  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.319499  131322 store.go:1428] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0408 04:13:32.319544  131322 reflector.go:255] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0408 04:13:32.319595  131322 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.319771  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.319804  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.320540  131322 store.go:1428] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0408 04:13:32.320709  131322 reflector.go:255] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0408 04:13:32.320824  131322 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.320992  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.321025  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.321912  131322 store.go:1428] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0408 04:13:32.321978  131322 reflector.go:255] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0408 04:13:32.322028  131322 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.322301  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.322323  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.323224  131322 store.go:1428] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0408 04:13:32.323338  131322 reflector.go:255] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0408 04:13:32.323490  131322 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.323661  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.323683  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.324337  131322 store.go:1428] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0408 04:13:32.324402  131322 reflector.go:255] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0408 04:13:32.324390  131322 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.324673  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.324806  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.325545  131322 store.go:1428] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0408 04:13:32.325684  131322 reflector.go:255] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0408 04:13:32.325808  131322 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.325973  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.326004  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.326830  131322 store.go:1428] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0408 04:13:32.326913  131322 reflector.go:255] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0408 04:13:32.326956  131322 instance.go:607] Enabling API group "rbac.authorization.k8s.io".
I0408 04:13:32.329431  131322 cacher.go:405] cacher (*rbac.Role): initialized
I0408 04:13:32.329450  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.329478  131322 cacher.go:405] cacher (*rbac.ClusterRole): initialized
I0408 04:13:32.329491  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.329615  131322 cacher.go:405] cacher (*rbac.ClusterRole): initialized
I0408 04:13:32.329626  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.329720  131322 cacher.go:405] cacher (*rbac.RoleBinding): initialized
I0408 04:13:32.329727  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.329780  131322 cacher.go:405] cacher (*rbac.ClusterRoleBinding): initialized
I0408 04:13:32.329789  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.329910  131322 cacher.go:405] cacher (*rbac.RoleBinding): initialized
I0408 04:13:32.329919  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.330151  131322 cacher.go:405] cacher (*rbac.ClusterRoleBinding): initialized
I0408 04:13:32.330174  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.330970  131322 cacher.go:405] cacher (*rbac.Role): initialized
I0408 04:13:32.330992  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
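Each reflector.go:255 "Listing and watching ..." line, followed later by "cacher (*T): initialized" and "Replace watchCache (rev: N)", records a watch cache performing an initial LIST at that etcd revision and then switching to WATCH. The same list-then-watch pattern is what client-go's Reflector implements; the sketch below is only an external illustration of that pattern, not the apiserver's internal cacher, and the Pods resource and kubeconfig path are placeholders rather than anything taken from this test.

package main

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig; any working client would do for the illustration.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// LIST pods in "default", then WATCH from the returned resourceVersion,
	// mirroring the "Listing and watching ..." / "Replace watchCache (rev: ...)"
	// sequence in the log above.
	lw := cache.NewListWatchFromClient(
		cs.CoreV1().RESTClient(), "pods", "default", fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &corev1.Pod{}, store, 0) // resync 0: rely on watch events

	stop := make(chan struct{})
	go r.Run(stop)
	time.Sleep(10 * time.Second)
	close(stop)
}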
I0408 04:13:32.331402  131322 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.331659  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.331689  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.332408  131322 store.go:1428] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0408 04:13:32.332517  131322 reflector.go:255] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0408 04:13:32.332623  131322 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.332774  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.332794  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.333459  131322 cacher.go:405] cacher (*scheduling.PriorityClass): initialized
I0408 04:13:32.333478  131322 store.go:1428] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0408 04:13:32.333518  131322 instance.go:607] Enabling API group "scheduling.k8s.io".
I0408 04:13:32.333594  131322 reflector.go:255] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0408 04:13:32.333479  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.333838  131322 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.333985  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.334000  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.334677  131322 store.go:1428] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0408 04:13:32.334721  131322 reflector.go:255] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0408 04:13:32.334859  131322 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.334936  131322 cacher.go:405] cacher (*scheduling.PriorityClass): initialized
I0408 04:13:32.334950  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.334984  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.335006  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.335834  131322 store.go:1428] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0408 04:13:32.335917  131322 reflector.go:255] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0408 04:13:32.336044  131322 cacher.go:405] cacher (*storage.StorageClass): initialized
I0408 04:13:32.336060  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.336113  131322 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.336291  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.336317  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.337076  131322 cacher.go:405] cacher (*storage.VolumeAttachment): initialized
I0408 04:13:32.337091  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.337424  131322 store.go:1428] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0408 04:13:32.337557  131322 reflector.go:255] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0408 04:13:32.337616  131322 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.337748  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.337771  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.338470  131322 store.go:1428] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0408 04:13:32.338534  131322 reflector.go:255] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0408 04:13:32.338552  131322 storage_factory.go:285] storing csistoragecapacities.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.338644  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.338659  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.339318  131322 store.go:1428] Monitoring csistoragecapacities.storage.k8s.io count at <storage-prefix>//csistoragecapacities
I0408 04:13:32.339377  131322 reflector.go:255] Listing and watching *storage.CSIStorageCapacity from storage/cacher.go:/csistoragecapacities
I0408 04:13:32.339515  131322 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.339624  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.339642  131322 cacher.go:405] cacher (*storage.CSIDriver): initialized
I0408 04:13:32.339654  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.339655  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.340128  131322 cacher.go:405] cacher (*storage.CSINode): initialized
I0408 04:13:32.340153  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.340210  131322 cacher.go:405] cacher (*storage.CSIStorageCapacity): initialized
I0408 04:13:32.340219  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.340427  131322 store.go:1428] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0408 04:13:32.340505  131322 reflector.go:255] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0408 04:13:32.340597  131322 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.340815  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.340844  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.341509  131322 cacher.go:405] cacher (*storage.StorageClass): initialized
I0408 04:13:32.341634  131322 store.go:1428] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0408 04:13:32.341659  131322 reflector.go:255] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0408 04:13:32.341637  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.341857  131322 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.342141  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.342188  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.342593  131322 cacher.go:405] cacher (*storage.VolumeAttachment): initialized
I0408 04:13:32.342614  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.343050  131322 store.go:1428] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0408 04:13:32.343086  131322 reflector.go:255] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0408 04:13:32.343315  131322 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.343423  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.343439  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.343869  131322 cacher.go:405] cacher (*storage.CSINode): initialized
I0408 04:13:32.343891  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.344338  131322 store.go:1428] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0408 04:13:32.344438  131322 reflector.go:255] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0408 04:13:32.344453  131322 instance.go:607] Enabling API group "storage.k8s.io".
I0408 04:13:32.344668  131322 storage_factory.go:285] storing flowschemas.flowcontrol.apiserver.k8s.io in flowcontrol.apiserver.k8s.io/v1beta1, reading as flowcontrol.apiserver.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.344789  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.344807  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.345424  131322 cacher.go:405] cacher (*storage.CSIDriver): initialized
I0408 04:13:32.345447  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.345466  131322 store.go:1428] Monitoring flowschemas.flowcontrol.apiserver.k8s.io count at <storage-prefix>//flowschemas
I0408 04:13:32.345650  131322 storage_factory.go:285] storing prioritylevelconfigurations.flowcontrol.apiserver.k8s.io in flowcontrol.apiserver.k8s.io/v1beta1, reading as flowcontrol.apiserver.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.345824  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.345687  131322 reflector.go:255] Listing and watching *flowcontrol.FlowSchema from storage/cacher.go:/flowschemas
I0408 04:13:32.345842  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.346510  131322 cacher.go:405] cacher (*flowcontrol.FlowSchema): initialized
I0408 04:13:32.346536  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.346539  131322 store.go:1428] Monitoring prioritylevelconfigurations.flowcontrol.apiserver.k8s.io count at <storage-prefix>//prioritylevelconfigurations
I0408 04:13:32.346597  131322 instance.go:607] Enabling API group "flowcontrol.apiserver.k8s.io".
I0408 04:13:32.346628  131322 reflector.go:255] Listing and watching *flowcontrol.PriorityLevelConfiguration from storage/cacher.go:/prioritylevelconfigurations
I0408 04:13:32.346920  131322 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.347175  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.347208  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.347763  131322 cacher.go:405] cacher (*flowcontrol.PriorityLevelConfiguration): initialized
I0408 04:13:32.347782  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.348450  131322 store.go:1428] Monitoring deployments.apps count at <storage-prefix>//deployments
I0408 04:13:32.348615  131322 reflector.go:255] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0408 04:13:32.348718  131322 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.348873  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.348892  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.349623  131322 store.go:1428] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0408 04:13:32.350270  131322 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.350408  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.350436  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.350656  131322 reflector.go:255] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0408 04:13:32.351280  131322 cacher.go:405] cacher (*apps.Deployment): initialized
I0408 04:13:32.351303  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.351553  131322 store.go:1428] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0408 04:13:32.351682  131322 cacher.go:405] cacher (*apps.StatefulSet): initialized
I0408 04:13:32.351694  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.351742  131322 reflector.go:255] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0408 04:13:32.351859  131322 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.352085  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.352114  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.353390  131322 store.go:1428] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0408 04:13:32.353432  131322 reflector.go:255] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0408 04:13:32.353622  131322 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.353731  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.353749  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.353812  131322 cacher.go:405] cacher (*apps.DaemonSet): initialized
I0408 04:13:32.354125  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.354225  131322 cacher.go:405] cacher (*apps.ReplicaSet): initialized
I0408 04:13:32.354320  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.354489  131322 store.go:1428] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0408 04:13:32.354611  131322 reflector.go:255] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0408 04:13:32.354809  131322 instance.go:607] Enabling API group "apps".
I0408 04:13:32.355029  131322 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.355411  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.355444  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.355646  131322 cacher.go:405] cacher (*apps.ControllerRevision): initialized
I0408 04:13:32.355669  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.356322  131322 store.go:1428] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0408 04:13:32.356405  131322 reflector.go:255] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0408 04:13:32.356576  131322 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.356715  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.356740  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.357302  131322 cacher.go:405] cacher (*admissionregistration.ValidatingWebhookConfiguration): initialized
I0408 04:13:32.357320  131322 watch_cache.go:550] Replace watchCache (rev: 83728) 
I0408 04:13:32.357507  131322 store.go:1428] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0408 04:13:32.357567  131322 reflector.go:255] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0408 04:13:32.357743  131322 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.358071  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.358203  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.358927  131322 store.go:1428] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0408 04:13:32.359223  131322 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.359392  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.359426  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.359613  131322 reflector.go:255] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0408 04:13:32.360336  131322 store.go:1428] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0408 04:13:32.360401  131322 instance.go:607] Enabling API group "admissionregistration.k8s.io".
I0408 04:13:32.360452  131322 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.360682  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.360709  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.360895  131322 reflector.go:255] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0408 04:13:32.361698  131322 store.go:1428] Monitoring events count at <storage-prefix>//events
I0408 04:13:32.361750  131322 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.361847  131322 reflector.go:255] Listing and watching *core.Event from storage/cacher.go:/events
I0408 04:13:32.362038  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:32.362074  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:32.362536  131322 cacher.go:405] cacher (*admissionregistration.ValidatingWebhookConfiguration): initialized
I0408 04:13:32.362551  131322 watch_cache.go:550] Replace watchCache (rev: 83729) 
I0408 04:13:32.362615  131322 cacher.go:405] cacher (*admissionregistration.MutatingWebhookConfiguration): initialized
I0408 04:13:32.362627  131322 watch_cache.go:550] Replace watchCache (rev: 83729) 
I0408 04:13:32.362922  131322 store.go:1428] Monitoring events count at <storage-prefix>//events
I0408 04:13:32.362929  131322 cacher.go:405] cacher (*admissionregistration.MutatingWebhookConfiguration): initialized
I0408 04:13:32.362950  131322 watch_cache.go:550] Replace watchCache (rev: 83729) 
I0408 04:13:32.362952  131322 cacher.go:405] cacher (*core.Event): initialized
I0408 04:13:32.362960  131322 watch_cache.go:550] Replace watchCache (rev: 83729) 
I0408 04:13:32.362983  131322 instance.go:607] Enabling API group "events.k8s.io".
I0408 04:13:32.363271  131322 reflector.go:255] Listing and watching *core.Event from storage/cacher.go:/events
I0408 04:13:32.363316  131322 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.363601  131322 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.363905  131322 cacher.go:405] cacher (*core.Event): initialized
I0408 04:13:32.363943  131322 watch_cache.go:550] Replace watchCache (rev: 83729) 
I0408 04:13:32.364007  131322 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.364166  131322 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.364319  131322 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.364495  131322 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.364859  131322 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.365022  131322 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.365156  131322 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.365508  131322 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.366538  131322 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.366828  131322 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.367655  131322 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.367988  131322 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.369272  131322 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.369517  131322 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.370351  131322 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.370613  131322 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.371247  131322 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.371515  131322 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.372354  131322 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.372650  131322 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.373243  131322 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.373534  131322 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.373771  131322 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.374382  131322 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.374685  131322 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.375042  131322 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.375908  131322 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.376680  131322 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.377622  131322 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.378414  131322 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.379381  131322 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.379734  131322 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.380655  131322 storage_factory.go:285] storing ingressclasses.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.381275  131322 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.381526  131322 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.382209  131322 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.382846  131322 storage_factory.go:285] storing ingressclasses.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.383632  131322 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.383954  131322 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.384598  131322 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.385163  131322 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
W0408 04:13:32.385230  131322 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0408 04:13:32.385996  131322 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.386275  131322 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.386858  131322 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.387522  131322 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.387801  131322 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.388481  131322 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.388931  131322 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.389524  131322 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.390107  131322 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.390681  131322 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.391180  131322 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.391872  131322 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.392526  131322 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
W0408 04:13:32.392598  131322 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0408 04:13:32.393298  131322 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.393863  131322 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
W0408 04:13:32.393932  131322 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0408 04:13:32.394456  131322 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.394935  131322 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.395433  131322 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.395967  131322 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.396299  131322 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.397004  131322 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.397538  131322 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.398449  131322 storage_factory.go:285] storing csistoragecapacities.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.399143  131322 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.399634  131322 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
W0408 04:13:32.399694  131322 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0408 04:13:32.400248  131322 storage_factory.go:285] storing flowschemas.flowcontrol.apiserver.k8s.io in flowcontrol.apiserver.k8s.io/v1beta1, reading as flowcontrol.apiserver.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.400653  131322 storage_factory.go:285] storing flowschemas.flowcontrol.apiserver.k8s.io in flowcontrol.apiserver.k8s.io/v1beta1, reading as flowcontrol.apiserver.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.401162  131322 storage_factory.go:285] storing prioritylevelconfigurations.flowcontrol.apiserver.k8s.io in flowcontrol.apiserver.k8s.io/v1beta1, reading as flowcontrol.apiserver.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.401491  131322 storage_factory.go:285] storing prioritylevelconfigurations.flowcontrol.apiserver.k8s.io in flowcontrol.apiserver.k8s.io/v1beta1, reading as flowcontrol.apiserver.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
W0408 04:13:32.401576  131322 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
I0408 04:13:32.402291  131322 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.402951  131322 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.403205  131322 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.403833  131322 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.404063  131322 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.404343  131322 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.404927  131322 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.405187  131322 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.405440  131322 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.406102  131322 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.406353  131322 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.406621  131322 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
W0408 04:13:32.406673  131322 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
W0408 04:13:32.406679  131322 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
I0408 04:13:32.407290  131322 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.407826  131322 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.408497  131322 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.409068  131322 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.409800  131322 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.410453  131322 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c521fca5-6eeb-4c31-8afe-859a7db95e1c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000, HealthcheckTimeout:2000000000, LeaseManagerConfig:etcd3.LeaseManagerConfig{ReuseDurationSeconds:60, MaxObjectCount:1000}}
I0408 04:13:32.413711  131322 apf_controller.go:294] Starting API Priority and Fairness config controller
I0408 04:13:32.413879  131322 reflector.go:219] Starting reflector *v1beta1.FlowSchema (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:32.413909  131322 reflector.go:255] Listing and watching *v1beta1.FlowSchema from k8s.io/client-go/informers/factory.go:134
I0408 04:13:32.413888  131322 reflector.go:219] Starting reflector *v1beta1.PriorityLevelConfiguration (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:32.414032  131322 reflector.go:255] Listing and watching *v1beta1.PriorityLevelConfiguration from k8s.io/client-go/informers/factory.go:134
W0408 04:13:32.414480  131322 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0408 04:13:32.414588  131322 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0408 04:13:32.414608  131322 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0408 04:13:32.414782  131322 reflector.go:219] Starting reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/controlplane/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0408 04:13:32.414801  131322 reflector.go:255] Listing and watching *v1.ConfigMap from k8s.io/kubernetes/pkg/controlplane/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0408 04:13:32.415652  131322 healthz.go:244] etcd,poststarthook/bootstrap-controller,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: healthz
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/bootstrap-controller failed: not finished
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:32.415877  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="463.29µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33378" resp=0
I0408 04:13:32.416226  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas?limit=500&resourceVersion=0" latency="777.829µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33384" resp=200
I0408 04:13:32.416280  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations?limit=500&resourceVersion=0" latency="757.509µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33386" resp=200
W0408 04:13:32.416454  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
W0408 04:13:32.416700  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.416480  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0" latency="1.037604ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33388" resp=200
I0408 04:13:32.417091  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency="1.718599ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33380" resp=404
I0408 04:13:32.417339  131322 get.go:260] "Starting watch" path="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas" resourceVersion="83728" labels="" fields="" timeout="5m14s"
I0408 04:13:32.417587  131322 get.go:260] "Starting watch" path="/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations" resourceVersion="83728" labels="" fields="" timeout="9m33s"
I0408 04:13:32.417752  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations/exempt" latency="2.35959ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
W0408 04:13:32.417752  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
W0408 04:13:32.417914  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+, unavailable in v1.26+
W0408 04:13:32.417931  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.419867  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/services" latency="1.235205ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33388" resp=200
I0408 04:13:32.420389  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas?fieldManager=api-priority-and-fairness-config-producer-v1" latency="2.081068ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
W0408 04:13:32.420564  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.420665  131322 storage_flowcontrol.go:189] Created suggested FlowSchema system-nodes
I0408 04:13:32.425336  131322 get.go:260] "Starting watch" path="/api/v1/namespaces/kube-system/configmaps" resourceVersion="83726" labels="" fields="" timeout="9m49s"
I0408 04:13:32.425499  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas?fieldManager=api-priority-and-fairness-config-producer-v1" latency="4.363179ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
W0408 04:13:32.425833  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.425948  131322 storage_flowcontrol.go:189] Created suggested FlowSchema system-leader-election
I0408 04:13:32.426500  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/services" latency="1.148765ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33388" resp=200
I0408 04:13:32.427991  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas?fieldManager=api-priority-and-fairness-config-producer-v1" latency="1.612013ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
W0408 04:13:32.428371  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.428448  131322 storage_flowcontrol.go:189] Created suggested FlowSchema workload-leader-election
I0408 04:13:32.431869  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas?fieldManager=api-priority-and-fairness-config-producer-v1" latency="3.157775ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
W0408 04:13:32.432066  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.432870  131322 storage_flowcontrol.go:189] Created suggested FlowSchema kube-controller-manager
I0408 04:13:32.433325  131322 shared_informer.go:270] caches populated
I0408 04:13:32.433442  131322 shared_informer.go:270] caches populated
I0408 04:13:32.433547  131322 healthz.go:244] etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:32.433914  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.087036ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=0
I0408 04:13:32.435228  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/services" latency="1.593973ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33392" resp=200
I0408 04:13:32.435294  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/services" latency="1.19178ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=200
I0408 04:13:32.435314  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="2.632393ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33388" resp=404
I0408 04:13:32.436128  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas?fieldManager=api-priority-and-fairness-config-producer-v1" latency="2.378944ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33390" resp=201
W0408 04:13:32.436413  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.436502  131322 storage_flowcontrol.go:189] Created suggested FlowSchema kube-scheduler
I0408 04:13:32.437353  131322 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces" latency="1.512864ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=201
I0408 04:13:32.438975  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas?fieldManager=api-priority-and-fairness-config-producer-v1" latency="2.186861ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33390" resp=201
W0408 04:13:32.439160  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.439369  131322 storage_flowcontrol.go:189] Created suggested FlowSchema kube-system-service-accounts
I0408 04:13:32.441805  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-public" latency="4.077111ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=404
I0408 04:13:32.441919  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas?fieldManager=api-priority-and-fairness-config-producer-v1" latency="2.261523ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33390" resp=201
W0408 04:13:32.442236  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.442330  131322 storage_flowcontrol.go:189] Created suggested FlowSchema service-accounts
I0408 04:13:32.443829  131322 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces" latency="1.614063ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=201
I0408 04:13:32.445316  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas?fieldManager=api-priority-and-fairness-config-producer-v1" latency="2.573218ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33390" resp=201
W0408 04:13:32.445467  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.445547  131322 storage_flowcontrol.go:189] Created suggested FlowSchema global-default
I0408 04:13:32.446223  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-node-lease" latency="1.864177ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=404
I0408 04:13:32.447571  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations?fieldManager=api-priority-and-fairness-config-producer-v1" latency="1.731602ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33390" resp=201
W0408 04:13:32.447720  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.447800  131322 storage_flowcontrol.go:200] Created suggested PriorityLevelConfiguration system
I0408 04:13:32.448278  131322 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces" latency="1.51216ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=201
I0408 04:13:32.449679  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations?fieldManager=api-priority-and-fairness-config-producer-v1" latency="1.638292ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33390" resp=201
W0408 04:13:32.449992  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.450056  131322 storage_flowcontrol.go:200] Created suggested PriorityLevelConfiguration leader-election
I0408 04:13:32.453292  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations?fieldManager=api-priority-and-fairness-config-producer-v1" latency="2.973768ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=201
W0408 04:13:32.453483  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.453572  131322 storage_flowcontrol.go:200] Created suggested PriorityLevelConfiguration workload-high
I0408 04:13:32.455514  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations?fieldManager=api-priority-and-fairness-config-producer-v1" latency="1.619149ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=201
W0408 04:13:32.455878  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.455984  131322 storage_flowcontrol.go:200] Created suggested PriorityLevelConfiguration workload-low
I0408 04:13:32.458052  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations?fieldManager=api-priority-and-fairness-config-producer-v1" latency="1.813732ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=201
W0408 04:13:32.458252  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.458312  131322 storage_flowcontrol.go:200] Created suggested PriorityLevelConfiguration global-default
I0408 04:13:32.459750  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/exempt" latency="1.007141ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=404
W0408 04:13:32.459921  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.461963  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas?fieldManager=api-priority-and-fairness-config-producer-v1" latency="1.608696ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=201
W0408 04:13:32.462144  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.462252  131322 storage_flowcontrol.go:234] Created mandatory FlowSchema exempt
I0408 04:13:32.463489  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/catch-all" latency="974.301µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=404
W0408 04:13:32.463815  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.465795  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas?fieldManager=api-priority-and-fairness-config-producer-v1" latency="1.644087ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=201
W0408 04:13:32.466069  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.466169  131322 storage_flowcontrol.go:234] Created mandatory FlowSchema catch-all
I0408 04:13:32.469230  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations/catch-all" latency="2.764011ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=404
W0408 04:13:32.469485  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.471481  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations?fieldManager=api-priority-and-fairness-config-producer-v1" latency="1.507182ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=201
W0408 04:13:32.471710  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.471802  131322 storage_flowcontrol.go:264] Created mandatory PriorityLevelConfiguration catch-all
I0408 04:13:32.473390  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations/exempt" latency="1.128901ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=404
W0408 04:13:32.473573  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.475628  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations?fieldManager=api-priority-and-fairness-config-producer-v1" latency="1.653892ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=201
W0408 04:13:32.475918  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.476021  131322 storage_flowcontrol.go:264] Created mandatory PriorityLevelConfiguration exempt
I0408 04:13:32.514361  131322 shared_informer.go:270] caches populated
I0408 04:13:32.514434  131322 apf_controller.go:299] Running API Priority and Fairness config worker
I0408 04:13:32.514946  131322 apf_controller.go:421] Controller writing Condition {"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"system\" and it exists"} to FlowSchema system-nodes, which had ResourceVersion=83731, because its previous value was {"type":"Dangling","lastTransitionTime":null}
I0408 04:13:32.515290  131322 shared_informer.go:270] caches populated
I0408 04:13:32.515321  131322 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
I0408 04:13:32.516975  131322 healthz.go:244] etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: healthz
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:32.517200  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="428.757µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33382" resp=0
I0408 04:13:32.518035  131322 httplog.go:89] "HTTP" verb="PATCH" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/system-nodes/status?fieldManager=api-priority-and-fairness-config-consumer-v1" latency="2.606447ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=200
W0408 04:13:32.518206  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.518318  131322 apf_controller.go:421] Controller writing Condition {"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"leader-election\" and it exists"} to FlowSchema workload-leader-election, which had ResourceVersion=83734, because its previous value was {"type":"Dangling","lastTransitionTime":null}
I0408 04:13:32.520650  131322 httplog.go:89] "HTTP" verb="PATCH" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/workload-leader-election/status?fieldManager=api-priority-and-fairness-config-consumer-v1" latency="1.938519ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=200
W0408 04:13:32.520837  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.520952  131322 apf_controller.go:421] Controller writing Condition {"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"workload-high\" and it exists"} to FlowSchema kube-controller-manager, which had ResourceVersion=83736, because its previous value was {"type":"Dangling","lastTransitionTime":null}
I0408 04:13:32.524190  131322 httplog.go:89] "HTTP" verb="PATCH" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/kube-controller-manager/status?fieldManager=api-priority-and-fairness-config-consumer-v1" latency="2.913209ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=200
W0408 04:13:32.524429  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.524605  131322 apf_controller.go:421] Controller writing Condition {"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"workload-high\" and it exists"} to FlowSchema kube-scheduler, which had ResourceVersion=83737, because its previous value was {"type":"Dangling","lastTransitionTime":null}
I0408 04:13:32.527131  131322 httplog.go:89] "HTTP" verb="PATCH" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/kube-scheduler/status?fieldManager=api-priority-and-fairness-config-consumer-v1" latency="2.222782ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=200
W0408 04:13:32.527340  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.527444  131322 apf_controller.go:421] Controller writing Condition {"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"global-default\" and it exists"} to FlowSchema global-default, which had ResourceVersion=83743, because its previous value was {"type":"Dangling","lastTransitionTime":null}
I0408 04:13:32.529741  131322 httplog.go:89] "HTTP" verb="PATCH" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/global-default/status?fieldManager=api-priority-and-fairness-config-consumer-v1" latency="2.035379ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=200
W0408 04:13:32.530064  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.530212  131322 apf_controller.go:421] Controller writing Condition {"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"exempt\" and it exists"} to FlowSchema exempt, which had ResourceVersion=83750, because its previous value was {"type":"Dangling","lastTransitionTime":null}
I0408 04:13:32.532989  131322 httplog.go:89] "HTTP" verb="PATCH" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/exempt/status?fieldManager=api-priority-and-fairness-config-consumer-v1" latency="2.354077ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=200
W0408 04:13:32.533196  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.533298  131322 apf_controller.go:421] Controller writing Condition {"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"leader-election\" and it exists"} to FlowSchema system-leader-election, which had ResourceVersion=83733, because its previous value was {"type":"Dangling","lastTransitionTime":null}
I0408 04:13:32.534597  131322 shared_informer.go:270] caches populated
I0408 04:13:32.534626  131322 shared_informer.go:270] caches populated
I0408 04:13:32.534655  131322 healthz.go:244] etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:32.534739  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="390.427µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=0
I0408 04:13:32.535543  131322 httplog.go:89] "HTTP" verb="PATCH" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/system-leader-election/status?fieldManager=api-priority-and-fairness-config-consumer-v1" latency="1.975969ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=200
W0408 04:13:32.536459  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.536674  131322 apf_controller.go:421] Controller writing Condition {"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"workload-high\" and it exists"} to FlowSchema kube-system-service-accounts, which had ResourceVersion=83740, because its previous value was {"type":"Dangling","lastTransitionTime":null}
I0408 04:13:32.539471  131322 httplog.go:89] "HTTP" verb="PATCH" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/kube-system-service-accounts/status?fieldManager=api-priority-and-fairness-config-consumer-v1" latency="2.443933ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=200
W0408 04:13:32.539652  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.539783  131322 apf_controller.go:421] Controller writing Condition {"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"workload-low\" and it exists"} to FlowSchema service-accounts, which had ResourceVersion=83741, because its previous value was {"type":"Dangling","lastTransitionTime":null}
I0408 04:13:32.543265  131322 httplog.go:89] "HTTP" verb="PATCH" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/service-accounts/status?fieldManager=api-priority-and-fairness-config-consumer-v1" latency="3.168351ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=200
W0408 04:13:32.543516  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.543646  131322 apf_controller.go:421] Controller writing Condition {"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"} to FlowSchema catch-all, which had ResourceVersion=83751, because its previous value was {"type":"Dangling","lastTransitionTime":null}
I0408 04:13:32.546417  131322 httplog.go:89] "HTTP" verb="PATCH" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/catch-all/status?fieldManager=api-priority-and-fairness-config-consumer-v1" latency="2.462973ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=200
W0408 04:13:32.546653  131322 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+
I0408 04:13:32.616827  131322 healthz.go:244] etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: healthz
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:32.616948  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="448.711µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33394" resp=0
I0408 04:13:32.635174  131322 shared_informer.go:270] caches populated
I0408 04:13:32.635204  131322 shared_informer.go:270] caches populated
I0408 04:13:32.635228  131322 healthz.go:244] etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:32.635305  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="470.038µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=0
I0408 04:13:32.716924  131322 healthz.go:244] etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: healthz
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:32.717028  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="385.974µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33394" resp=0
I0408 04:13:32.735304  131322 shared_informer.go:270] caches populated
I0408 04:13:32.735334  131322 shared_informer.go:270] caches populated
I0408 04:13:32.735373  131322 healthz.go:244] etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:32.735479  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="460.06µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=0
I0408 04:13:32.816912  131322 healthz.go:244] etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: healthz
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:32.817032  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="430.349µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33394" resp=0
I0408 04:13:32.835343  131322 shared_informer.go:270] caches populated
I0408 04:13:32.835387  131322 shared_informer.go:270] caches populated
I0408 04:13:32.835422  131322 healthz.go:244] etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:32.835537  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="438.138µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=0
I0408 04:13:32.917244  131322 healthz.go:244] etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: healthz
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:32.917391  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="442.62µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33394" resp=0
I0408 04:13:32.935247  131322 shared_informer.go:270] caches populated
I0408 04:13:32.935273  131322 shared_informer.go:270] caches populated
I0408 04:13:32.935303  131322 healthz.go:244] etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:32.935366  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="392.585µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=0
I0408 04:13:33.017197  131322 healthz.go:244] etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: healthz
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:33.017313  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="418.339µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33394" resp=0
I0408 04:13:33.035528  131322 shared_informer.go:270] caches populated
I0408 04:13:33.035564  131322 shared_informer.go:270] caches populated
I0408 04:13:33.035589  131322 healthz.go:244] etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:33.035673  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="357.79µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=0
I0408 04:13:33.117889  131322 healthz.go:244] etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: healthz
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:33.117991  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="505.952µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33394" resp=0
I0408 04:13:33.135544  131322 shared_informer.go:270] caches populated
I0408 04:13:33.135576  131322 shared_informer.go:270] caches populated
I0408 04:13:33.135601  131322 healthz.go:244] etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:33.135689  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="368.275µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=0
I0408 04:13:33.217087  131322 healthz.go:244] etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: healthz
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:33.217235  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="506.402µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33394" resp=0
I0408 04:13:33.234786  131322 shared_informer.go:270] caches populated
I0408 04:13:33.234813  131322 shared_informer.go:270] caches populated
I0408 04:13:33.234847  131322 healthz.go:244] etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:33.234921  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="358.191µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=0
I0408 04:13:33.247574  131322 client.go:360] parsed scheme: "endpoint"
I0408 04:13:33.247643  131322 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0408 04:13:33.319676  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:33.319859  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="2.720577ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33394" resp=0
I0408 04:13:33.336434  131322 shared_informer.go:270] caches populated
I0408 04:13:33.336463  131322 shared_informer.go:270] caches populated
I0408 04:13:33.336498  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:33.336620  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.353665ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=0
I0408 04:13:33.416223  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical" latency="1.661954ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=404
I0408 04:13:33.416305  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.298518ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=200
I0408 04:13:33.417488  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0408 04:13:33.417588  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="856.714µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33382" resp=0
I0408 04:13:33.417962  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="1.172991ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=200
I0408 04:13:33.419479  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin" latency="1.090746ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.419843  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/scheduling.k8s.io/v1/priorityclasses" latency="2.835688ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=201
I0408 04:13:33.420103  131322 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0408 04:13:33.421172  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/admin" latency="1.276867ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.421262  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical" latency="915.264µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33394" resp=404
I0408 04:13:33.422537  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit" latency="978.039µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.422920  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/scheduling.k8s.io/v1/priorityclasses" latency="1.273695ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.423242  131322 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0408 04:13:33.423264  131322 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
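The two storage_scheduling.go lines record the apiserver seeding its built-in PriorityClasses at startup: system-node-critical at 2000001000 and system-cluster-critical at 2000000000. A minimal sketch of creating a comparable user-defined PriorityClass with client-go; the kubeconfig path, name, and value are assumptions for illustration, and the integration test itself talks to an in-process server rather than a kubeconfig.

package main

import (
	"context"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pc := &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "high-priority"}, // assumed example name
		Value:       100000,                                   // well below the system-* values seen above
		Description: "example user-defined priority class",
	}
	if _, err := client.SchedulingV1().PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}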
I0408 04:13:33.424444  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/edit" latency="1.442972ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.425979  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view" latency="1.096203ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.427698  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/view" latency="1.205653ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.434211  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery" latency="6.122093ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.435770  131322 shared_informer.go:270] caches populated
I0408 04:13:33.435795  131322 shared_informer.go:270] caches populated
I0408 04:13:33.435839  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:33.435919  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.471318ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=0
I0408 04:13:33.435944  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin" latency="1.086847ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.439871  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="3.059429ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.440308  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/cluster-admin
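The GET-404-then-POST-201 pairs from here on are the rbac/bootstrap-roles post-start hook reconciling each default ClusterRole: look it up, and create it only when it is missing. A minimal sketch of that get-or-create pattern with client-go follows; the kubeconfig path, role name, and rules are assumptions for illustration.

package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ensureClusterRole creates the ClusterRole only when it does not already
// exist, mirroring the GET (404) followed by POST (201) pairs in the log.
func ensureClusterRole(ctx context.Context, client kubernetes.Interface, role *rbacv1.ClusterRole) error {
	_, err := client.RbacV1().ClusterRoles().Get(ctx, role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present, nothing to do
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	_, err = client.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	role := &rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "example:read-pods"}, // hypothetical role
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{""},
			Resources: []string{"pods"},
			Verbs:     []string{"get", "list", "watch"},
		}},
	}
	if err := ensureClusterRole(context.TODO(), client, role); err != nil {
		panic(err)
	}
}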
I0408 04:13:33.442906  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery" latency="2.269621ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.445390  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.88436ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.445774  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0408 04:13:33.447510  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:monitoring" latency="1.347691ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.449844  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.812521ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.450085  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:monitoring
I0408 04:13:33.451305  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user" latency="874.884µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.453882  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.956366ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.454230  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0408 04:13:33.455486  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer" latency="1.00237ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.457946  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.947154ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.458172  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0408 04:13:33.459437  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/admin" latency="994.028µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.461593  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.656217ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.461886  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/admin
I0408 04:13:33.462929  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/edit" latency="808.372µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.465086  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.620215ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.465335  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/edit
I0408 04:13:33.467803  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/view" latency="2.195297ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.470273  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.749137ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.470549  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/view
I0408 04:13:33.471835  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin" latency="1.060825ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.474140  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.712967ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.474358  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0408 04:13:33.475457  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit" latency="873.109µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.477866  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.886808ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.478140  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0408 04:13:33.480693  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view" latency="2.308343ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.483009  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.7918ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.483343  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0408 04:13:33.484649  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster" latency="993.219µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.486640  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.48102ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.486895  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0408 04:13:33.488238  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node" latency="904.166µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.490709  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.905279ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.491019  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:node
I0408 04:13:33.493130  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector" latency="1.85528ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.495329  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.74668ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.495563  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0408 04:13:33.496802  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin" latency="993.514µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.499641  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.371912ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.499978  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0408 04:13:33.502337  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper" latency="2.024086ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.504538  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.785058ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.504751  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0408 04:13:33.506818  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator" latency="1.743139ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.509486  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.067399ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.509719  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0408 04:13:33.510944  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator" latency="977.536µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.513185  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.720878ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.513398  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0408 04:13:33.514429  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager" latency="803.609µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.517041  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.967183ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:33.517287  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0408 04:13:33.519593  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:33.519798  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="3.074581ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33382" resp=0
I0408 04:13:33.519839  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns" latency="2.34438ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:33.521997  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.648431ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.522360  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0408 04:13:33.523502  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner" latency="907.77µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.525764  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.667171ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.526166  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0408 04:13:33.527444  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient" latency="983.201µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.529704  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.740128ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.529923  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0408 04:13:33.530969  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" latency="830.707µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.533141  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.713063ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.533376  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0408 04:13:33.538434  131322 shared_informer.go:270] caches populated
I0408 04:13:33.538610  131322 shared_informer.go:270] caches populated
I0408 04:13:33.538646  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:33.538719  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="4.327445ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:33.538849  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler" latency="5.214295ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.541063  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.720049ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.541282  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0408 04:13:33.544062  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:legacy-unknown-approver" latency="2.621353ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.546429  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.766298ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.546741  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:legacy-unknown-approver
I0408 04:13:33.547833  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kubelet-serving-approver" latency="859.164µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.549816  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.60774ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.550073  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kubelet-serving-approver
I0408 04:13:33.551232  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kube-apiserver-client-approver" latency="909.002µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.553182  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.553199ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.553471  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-approver
I0408 04:13:33.554620  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver" latency="900.699µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.556597  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.487272ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.557091  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver
I0408 04:13:33.561036  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:service-account-issuer-discovery" latency="3.662156ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.563630  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.008147ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.563928  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:service-account-issuer-discovery
I0408 04:13:33.565484  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier" latency="1.198552ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.567931  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.666643ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.568236  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0408 04:13:33.569614  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler" latency="1.007767ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.572648  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.371817ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.572981  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0408 04:13:33.575150  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller" latency="1.855778ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.577887  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.200259ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.578178  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0408 04:13:33.579650  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller" latency="1.201662ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.582051  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.763937ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.582379  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0408 04:13:33.583551  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller" latency="906.572µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.586105  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.000426ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.586328  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0408 04:13:33.587524  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller" latency="984.998µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.589948  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.910423ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.590244  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0408 04:13:33.592796  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller" latency="2.299509ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.595422  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.986837ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.595777  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0408 04:13:33.597451  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller" latency="1.375026ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.599819  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.819239ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.600244  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0408 04:13:33.601606  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller" latency="1.06054ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.603685  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.652001ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.603941  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0408 04:13:33.606851  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpointslice-controller" latency="2.442703ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.609250  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.846067ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.609663  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0408 04:13:33.610862  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpointslicemirroring-controller" latency="913.502µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.613462  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.16778ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.613696  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:endpointslicemirroring-controller
I0408 04:13:33.614973  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller" latency="1.041016ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.617404  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.82133ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.617644  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0408 04:13:33.618207  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:33.618299  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.878244ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:33.619163  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ephemeral-volume-controller" latency="956.328µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.621626  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.895945ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.621904  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:ephemeral-volume-controller
I0408 04:13:33.624913  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector" latency="2.750704ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.627398  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.910426ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.627667  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0408 04:13:33.629044  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler" latency="968.066µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.631538  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.835493ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.631820  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0408 04:13:33.633119  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller" latency="1.027362ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.635519  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.837517ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.635905  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0408 04:13:33.636444  131322 shared_informer.go:270] caches populated
I0408 04:13:33.636463  131322 shared_informer.go:270] caches populated
I0408 04:13:33.636490  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:33.636576  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="2.18589ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:33.637251  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller" latency="839.77µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.644903  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="7.165797ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.645237  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0408 04:13:33.646603  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller" latency="906.3µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.650384  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="3.28876ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.650693  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0408 04:13:33.652179  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder" latency="1.046337ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.654419  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.666843ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.654710  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0408 04:13:33.655849  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector" latency="869.334µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.659570  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.763097ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.659803  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0408 04:13:33.663356  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller" latency="3.282291ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.665729  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.866858ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.665977  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0408 04:13:33.667332  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller" latency="933.595µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.670233  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.437857ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.670596  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0408 04:13:33.671690  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller" latency="868.29µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.677841  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.071803ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.678243  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0408 04:13:33.696565  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller" latency="1.289747ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.717896  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.226269ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.718224  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0408 04:13:33.719119  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:33.719230  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="2.794581ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:33.736750  131322 shared_informer.go:270] caches populated
I0408 04:13:33.736791  131322 shared_informer.go:270] caches populated
I0408 04:13:33.736820  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:33.736920  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.377717ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:33.736923  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller" latency="1.21483ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.757373  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.32017ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.757722  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0408 04:13:33.776830  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller" latency="1.56485ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.798244  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.34898ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.798504  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0408 04:13:33.817139  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller" latency="1.521602ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.817696  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:33.817828  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.205104ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:33.835951  131322 shared_informer.go:270] caches populated
I0408 04:13:33.835978  131322 shared_informer.go:270] caches populated
I0408 04:13:33.836037  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:33.836154  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.310297ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:33.836963  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.003671ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.837404  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0408 04:13:33.857367  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller" latency="1.484823ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.878220  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.204747ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.878545  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0408 04:13:33.897755  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller" latency="1.522796ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.918058  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:33.918305  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.49156ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:33.918403  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.463195ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.918703  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0408 04:13:33.935860  131322 shared_informer.go:270] caches populated
I0408 04:13:33.935891  131322 shared_informer.go:270] caches populated
I0408 04:13:33.935939  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:33.936052  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.532535ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:33.936056  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller" latency="1.114546ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.959683  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="4.702821ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.960071  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0408 04:13:33.976441  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller" latency="1.317289ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:33.998544  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.787423ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:33.998848  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0408 04:13:34.017126  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-after-finished-controller" latency="1.341767ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:34.017314  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.017412  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="954.405µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:34.036307  131322 shared_informer.go:270] caches populated
I0408 04:13:34.036339  131322 shared_informer.go:270] caches populated
I0408 04:13:34.036372  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.036538  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.772389ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:34.036869  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.905823ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:34.037102  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-after-finished-controller
I0408 04:13:34.056776  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:root-ca-cert-publisher" latency="1.495166ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:34.077773  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.324308ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:34.078438  131322 storage_rbac.go:236] created clusterrole.rbac.authorization.k8s.io/system:controller:root-ca-cert-publisher
I0408 04:13:34.097189  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin" latency="1.846406ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:34.116984  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.072162ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:34.117278  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0408 04:13:34.121172  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.121562  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="3.933592ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33382" resp=0
I0408 04:13:34.135969  131322 shared_informer.go:270] caches populated
I0408 04:13:34.136133  131322 shared_informer.go:270] caches populated
I0408 04:13:34.136180  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.136283  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.431457ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=0
I0408 04:13:34.136289  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:monitoring" latency="1.36694ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:34.157559  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.225526ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:34.157896  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:monitoring
I0408 04:13:34.177361  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery" latency="1.551671ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:34.198783  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.462414ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:34.199148  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0408 04:13:34.217672  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user" latency="1.670263ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:34.217779  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.217842  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.009866ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33382" resp=0
I0408 04:13:34.236810  131322 shared_informer.go:270] caches populated
I0408 04:13:34.236827  131322 shared_informer.go:270] caches populated
I0408 04:13:34.236878  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.237125  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.151363ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:34.237137  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="2.464192ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=0
I0408 04:13:34.237768  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0408 04:13:34.256724  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer" latency="1.308607ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:34.280447  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="4.108356ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:34.280843  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0408 04:13:34.301583  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier" latency="2.115238ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:34.316775  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="1.823015ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:34.317040  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0408 04:13:34.318031  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.318140  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.281041ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33382" resp=0
I0408 04:13:34.336108  131322 shared_informer.go:270] caches populated
I0408 04:13:34.336317  131322 shared_informer.go:270] caches populated
I0408 04:13:34.336363  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.336478  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.488254ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:34.336565  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager" latency="1.583808ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:34.357663  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.551019ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:34.357966  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0408 04:13:34.376426  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns" latency="1.468657ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:34.398095  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.363412ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:34.398460  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0408 04:13:34.416480  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler" latency="1.481776ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:34.417279  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.417386  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="806.694µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33382" resp=0
I0408 04:13:34.436213  131322 shared_informer.go:270] caches populated
I0408 04:13:34.436243  131322 shared_informer.go:270] caches populated
I0408 04:13:34.436303  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.436461  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.505382ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=0
I0408 04:13:34.437188  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.206744ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:34.437593  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0408 04:13:34.457560  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler" latency="1.458808ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:34.478413  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.445514ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:34.478674  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0408 04:13:34.497222  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node" latency="1.574087ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:34.517539  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.287036ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:34.517861  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0408 04:13:34.518407  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.518526  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.814091ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33382" resp=0
I0408 04:13:34.536470  131322 shared_informer.go:270] caches populated
I0408 04:13:34.536520  131322 shared_informer.go:270] caches populated
I0408 04:13:34.536563  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.536659  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.542746ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:34.536798  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:service-account-issuer-discovery" latency="1.698427ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:34.557876  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.111869ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:34.558300  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:service-account-issuer-discovery
I0408 04:13:34.576719  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller" latency="1.459442ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:34.602838  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="7.672681ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:34.603109  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0408 04:13:34.616830  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller" latency="1.576825ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:34.617895  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.617997  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.160546ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:34.636331  131322 shared_informer.go:270] caches populated
I0408 04:13:34.636515  131322 shared_informer.go:270] caches populated
I0408 04:13:34.636681  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.636904  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.986333ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:34.637418  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.427266ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:34.637788  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0408 04:13:34.656321  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller" latency="1.289614ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:34.677558  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.429639ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:34.677822  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0408 04:13:34.696794  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller" latency="1.745377ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:34.716842  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="1.958813ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:34.717321  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0408 04:13:34.717755  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.717878  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.182103ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:34.735977  131322 shared_informer.go:270] caches populated
I0408 04:13:34.736005  131322 shared_informer.go:270] caches populated
I0408 04:13:34.736035  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.736153  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.497441ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:34.736315  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller" latency="1.446686ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:34.757501  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.289034ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:34.757827  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0408 04:13:34.777188  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller" latency="1.47808ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:34.797528  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.041049ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:34.798026  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0408 04:13:34.825717  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.826054  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller" latency="10.090037ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:34.826215  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="9.193375ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:34.836206  131322 shared_informer.go:270] caches populated
I0408 04:13:34.836235  131322 shared_informer.go:270] caches populated
I0408 04:13:34.836285  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.836402  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.343881ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:34.837216  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.023034ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:34.837487  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0408 04:13:34.856941  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpointslice-controller" latency="1.914696ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:34.877817  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.707293ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:34.878182  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0408 04:13:34.896740  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpointslicemirroring-controller" latency="1.623498ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:34.917642  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.54987ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:34.917939  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslicemirroring-controller
I0408 04:13:34.919305  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.919444  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="2.808827ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:34.936016  131322 shared_informer.go:270] caches populated
I0408 04:13:34.936046  131322 shared_informer.go:270] caches populated
I0408 04:13:34.936037  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller" latency="1.144592ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:34.936077  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:34.936330  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.576226ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:34.957711  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.132196ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:34.958012  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0408 04:13:34.976589  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ephemeral-volume-controller" latency="1.567607ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:34.997539  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.295759ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:34.997826  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ephemeral-volume-controller
I0408 04:13:35.017583  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.017711  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="999.758µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33382" resp=0
I0408 04:13:35.017747  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector" latency="1.789855ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:35.036087  131322 shared_informer.go:270] caches populated
I0408 04:13:35.036120  131322 shared_informer.go:270] caches populated
I0408 04:13:35.036150  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.036297  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.498062ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:35.036990  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.089618ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:35.037369  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0408 04:13:35.057632  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler" latency="1.363885ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:35.077827  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.322674ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:35.078197  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0408 04:13:35.096854  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller" latency="1.728465ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:35.117522  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.148394ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:35.117779  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0408 04:13:35.118213  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.118318  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.493847ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:35.136220  131322 shared_informer.go:270] caches populated
I0408 04:13:35.136249  131322 shared_informer.go:270] caches populated
I0408 04:13:35.136299  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.136394  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.502085ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:35.136394  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller" latency="1.394512ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:35.157238  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.242671ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:35.157532  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0408 04:13:35.177467  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller" latency="1.507291ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:35.197558  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.571946ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:35.197849  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0408 04:13:35.217621  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.217872  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.108627ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:35.217671  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder" latency="1.618892ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:35.238437  131322 shared_informer.go:270] caches populated
I0408 04:13:35.238466  131322 shared_informer.go:270] caches populated
I0408 04:13:35.238496  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.238622  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="3.047512ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:35.239218  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="3.662355ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:35.239487  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0408 04:13:35.256850  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector" latency="1.425246ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:35.278228  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.52662ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:35.278549  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0408 04:13:35.297263  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller" latency="1.314097ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:35.317215  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.191116ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:35.317463  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0408 04:13:35.318116  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.318309  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.813929ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:35.335965  131322 shared_informer.go:270] caches populated
I0408 04:13:35.335997  131322 shared_informer.go:270] caches populated
I0408 04:13:35.336026  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.336117  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.290862ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:35.336132  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller" latency="1.245971ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:35.358403  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.340699ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:35.358868  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0408 04:13:35.376673  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller" latency="1.23817ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:35.402609  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="1.820128ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:35.402916  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0408 04:13:35.416082  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller" latency="1.162852ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:35.418008  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.418143  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="978.976µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:35.436179  131322 shared_informer.go:270] caches populated
I0408 04:13:35.436203  131322 shared_informer.go:270] caches populated
I0408 04:13:35.436279  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.436398  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.141654ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=0
I0408 04:13:35.437178  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="1.951059ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:35.437430  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0408 04:13:35.458274  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller" latency="2.187091ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:35.479571  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.251924ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:35.480126  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0408 04:13:35.497366  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller" latency="1.308376ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:35.531101  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.531216  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="2.048462ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33382" resp=0
I0408 04:13:35.531383  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.484647ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:35.531742  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0408 04:13:35.535432  131322 shared_informer.go:270] caches populated
I0408 04:13:35.535458  131322 shared_informer.go:270] caches populated
I0408 04:13:35.535498  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.535600  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.001402ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:35.535938  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller" latency="869.076µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:35.562661  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.232567ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:35.563089  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0408 04:13:35.580003  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller" latency="1.290497ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=404
I0408 04:13:35.634025  131322 request.go:600] Waited for 53.551753ms due to client-side throttling, not priority and fairness, request: POST:http://127.0.0.1:37285/apis/rbac.authorization.k8s.io/v1/clusterrolebindings
I0408 04:13:35.640108  131322 shared_informer.go:270] caches populated
I0408 04:13:35.640136  131322 shared_informer.go:270] caches populated
I0408 04:13:35.640166  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.640108  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.640289  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="4.205533ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=0
I0408 04:13:35.640568  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="5.695377ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:35.643931  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="9.531088ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33382" resp=201
I0408 04:13:35.644246  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0408 04:13:35.645485  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller" latency="970.232µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:35.647398  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="1.515824ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:35.647619  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0408 04:13:35.656453  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller" latency="1.167187ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:35.686765  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.060376ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:35.687414  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0408 04:13:35.697836  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller" latency="1.20131ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:35.717625  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.154877ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:35.717911  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0408 04:13:35.718694  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.718814  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="2.313ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33540" resp=0
I0408 04:13:35.735711  131322 shared_informer.go:270] caches populated
I0408 04:13:35.735733  131322 shared_informer.go:270] caches populated
I0408 04:13:35.735761  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.735834  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.180261ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=0
I0408 04:13:35.736235  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-after-finished-controller" latency="1.023209ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:35.763818  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.737585ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:35.764091  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-after-finished-controller
I0408 04:13:35.782541  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:root-ca-cert-publisher" latency="6.797996ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:35.799162  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="3.194328ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:35.799647  131322 storage_rbac.go:266] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:root-ca-cert-publisher
I0408 04:13:35.816966  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader" latency="1.2362ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:35.817389  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.817504  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.025993ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33540" resp=0
I0408 04:13:35.836480  131322 shared_informer.go:270] caches populated
I0408 04:13:35.836510  131322 shared_informer.go:270] caches populated
I0408 04:13:35.836538  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.836639  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.593042ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=0
I0408 04:13:35.836722  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="1.651389ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=200
I0408 04:13:35.862814  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency="6.695459ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:35.863148  131322 storage_rbac.go:299] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0408 04:13:35.877110  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer" latency="1.190866ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:35.897389  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="1.476027ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=200
I0408 04:13:35.917273  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency="2.24701ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:35.917632  131322 storage_rbac.go:299] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0408 04:13:35.918164  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.918263  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.702166ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33540" resp=0
I0408 04:13:35.936105  131322 shared_informer.go:270] caches populated
I0408 04:13:35.936133  131322 shared_informer.go:270] caches populated
I0408 04:13:35.936169  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:35.936288  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.256815ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=0
I0408 04:13:35.936450  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider" latency="1.422958ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:35.956933  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="1.280916ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=200
I0408 04:13:35.977518  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency="2.31315ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:35.977825  131322 storage_rbac.go:299] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0408 04:13:35.997128  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner" latency="1.13568ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:36.017022  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="1.538907ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=200
I0408 04:13:36.017378  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:36.017494  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="922.918µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33540" resp=0
I0408 04:13:36.036889  131322 shared_informer.go:270] caches populated
I0408 04:13:36.036915  131322 shared_informer.go:270] caches populated
I0408 04:13:36.036944  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:36.037145  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.492329ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:36.037873  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency="2.306823ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=201
I0408 04:13:36.038182  131322 storage_rbac.go:299] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0408 04:13:36.057213  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager" latency="1.425252ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=404
I0408 04:13:36.077403  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="1.785351ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=200
I0408 04:13:36.097907  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency="2.22717ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=201
I0408 04:13:36.098341  131322 storage_rbac.go:299] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0408 04:13:36.116940  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler" latency="1.606307ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=404
I0408 04:13:36.117377  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:36.117472  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="900.44µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:36.135551  131322 shared_informer.go:270] caches populated
I0408 04:13:36.135582  131322 shared_informer.go:270] caches populated
I0408 04:13:36.135612  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:36.135698  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.210374ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:36.136340  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="1.222501ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=200
I0408 04:13:36.157796  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency="2.464384ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=201
I0408 04:13:36.158071  131322 storage_rbac.go:299] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0408 04:13:36.176796  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer" latency="1.74741ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=404
I0408 04:13:36.196814  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-public" latency="1.79068ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=200
I0408 04:13:36.217520  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles" latency="2.119775ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=201
I0408 04:13:36.217784  131322 storage_rbac.go:299] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0408 04:13:36.218867  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:36.218996  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="2.483093ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:36.235864  131322 shared_informer.go:270] caches populated
I0408 04:13:36.235892  131322 shared_informer.go:270] caches populated
I0408 04:13:36.235921  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:36.236019  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.199805ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:36.236524  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader" latency="1.59046ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=404
I0408 04:13:36.256666  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="1.717807ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=200
I0408 04:13:36.277829  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency="2.486372ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:36.278134  131322 storage_rbac.go:331] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0408 04:13:36.297431  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager" latency="1.519575ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:36.317201  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="1.672569ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=200
I0408 04:13:36.317957  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:36.318069  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.133613ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33540" resp=0
I0408 04:13:36.336646  131322 shared_informer.go:270] caches populated
I0408 04:13:36.336672  131322 shared_informer.go:270] caches populated
I0408 04:13:36.336719  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:36.336828  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.312306ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=0
I0408 04:13:36.337549  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency="2.047499ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:36.337836  131322 storage_rbac.go:331] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0408 04:13:36.357222  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler" latency="1.486906ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:36.377375  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="2.20518ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=200
I0408 04:13:36.398490  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency="2.439021ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:36.398819  131322 storage_rbac.go:331] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0408 04:13:36.417159  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer" latency="1.72162ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:36.417956  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:36.418059  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.233342ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33540" resp=0
I0408 04:13:36.435975  131322 shared_informer.go:270] caches populated
I0408 04:13:36.436005  131322 shared_informer.go:270] caches populated
I0408 04:13:36.436054  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:36.436168  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.453519ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=0
I0408 04:13:36.436381  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="1.347236ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=200
I0408 04:13:36.457517  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency="2.418757ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:36.457787  131322 storage_rbac.go:331] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0408 04:13:36.477152  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider" latency="1.427992ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:36.497069  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="1.577553ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=200
I0408 04:13:36.517471  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency="2.166872ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=201
I0408 04:13:36.517812  131322 storage_rbac.go:331] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0408 04:13:36.518418  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:36.518601  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="2.004017ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33540" resp=0
I0408 04:13:36.536483  131322 shared_informer.go:270] caches populated
I0408 04:13:36.536509  131322 shared_informer.go:270] caches populated
I0408 04:13:36.536556  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:36.536595  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner" latency="1.265038ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=404
I0408 04:13:36.536663  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.412866ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=0
I0408 04:13:36.566967  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="1.39028ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=200
I0408 04:13:36.577148  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency="2.286927ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=201
I0408 04:13:36.577447  131322 storage_rbac.go:331] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0408 04:13:36.596497  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer" latency="1.376568ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=404
I0408 04:13:36.616537  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-public" latency="1.567982ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=200
I0408 04:13:36.617467  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: healthz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:36.617575  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.091085ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:36.636608  131322 shared_informer.go:270] caches populated
I0408 04:13:36.636638  131322 shared_informer.go:270] caches populated
I0408 04:13:36.636782  131322 healthz.go:244] poststarthook/rbac/bootstrap-roles check failed: readyz
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0408 04:13:36.636928  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.570079ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:36.637592  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings" latency="2.233948ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=201
I0408 04:13:36.637906  131322 storage_rbac.go:331] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0408 04:13:36.718075  131322 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.165317ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:33540" resp=200
W0408 04:13:36.718830  131322 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0408 04:13:36.718953  131322 factory.go:194] "Creating scheduler from algorithm provider" algorithmProvider="DefaultProvider"
W0408 04:13:36.719224  131322 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0408 04:13:36.719263  131322 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0408 04:13:36.719286  131322 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0408 04:13:36.719295  131322 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0408 04:13:36.719384  131322 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0408 04:13:36.719402  131322 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0408 04:13:36.719414  131322 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0408 04:13:36.719424  131322 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0408 04:13:36.719686  131322 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0408 04:13:36.719732  131322 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0408 04:13:36.719785  131322 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0408 04:13:36.719795  131322 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0408 04:13:36.720373  131322 reflector.go:219] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720508  131322 reflector.go:219] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720507  131322 reflector.go:219] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720536  131322 reflector.go:255] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720456  131322 reflector.go:219] Starting reflector *v1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720568  131322 reflector.go:255] Listing and watching *v1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720579  131322 reflector.go:219] Starting reflector *v1.CSIDriver (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720589  131322 reflector.go:255] Listing and watching *v1.CSIDriver from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720637  131322 reflector.go:219] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720651  131322 reflector.go:255] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720686  131322 reflector.go:219] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720703  131322 reflector.go:219] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720713  131322 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720516  131322 reflector.go:255] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720534  131322 reflector.go:219] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720775  131322 reflector.go:255] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720853  131322 reflector.go:219] Starting reflector *v1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720869  131322 reflector.go:255] Listing and watching *v1.CSINode from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720424  131322 reflector.go:219] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720892  131322 reflector.go:255] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720529  131322 reflector.go:255] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720707  131322 reflector.go:255] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.720989  131322 reflector.go:219] Starting reflector *v1beta1.CSIStorageCapacity (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.721001  131322 reflector.go:255] Listing and watching *v1beta1.CSIStorageCapacity from k8s.io/client-go/informers/factory.go:134
W0408 04:13:36.721441  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/apis/storage.k8s.io/v1/csidrivers", Verb:"list", APIPrefix:"apis", APIGroup:"storage.k8s.io", APIVersion:"v1", Namespace:"", Resource:"csidrivers", Subresource:"", Name:"", Parts:[]string{"csidrivers"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:36.721825  131322 reflector.go:219] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:36.721871  131322 reflector.go:255] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
W0408 04:13:36.724984  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/nodes", Verb:"list", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"", Resource:"nodes", Subresource:"", Name:"", Parts:[]string{"nodes"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
W0408 04:13:36.725276  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/pods", Verb:"list", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"", Resource:"pods", Subresource:"", Name:"", Parts:[]string{"pods"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
W0408 04:13:36.725538  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/services", Verb:"list", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"", Resource:"services", Subresource:"", Name:"", Parts:[]string{"services"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:36.725742  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/nodes?limit=500&resourceVersion=0" latency="4.408612ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33778" resp=200
W0408 04:13:36.725766  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/replicationcontrollers", Verb:"list", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"", Resource:"replicationcontrollers", Subresource:"", Name:"", Parts:[]string{"replicationcontrollers"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
W0408 04:13:36.726012  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/apis/storage.k8s.io/v1/csinodes", Verb:"list", APIPrefix:"apis", APIGroup:"storage.k8s.io", APIVersion:"v1", Namespace:"", Resource:"csinodes", Subresource:"", Name:"", Parts:[]string{"csinodes"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:36.726151  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/services?limit=500&resourceVersion=0" latency="4.726097ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33784" resp=200
I0408 04:13:36.726226  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/replicationcontrollers?limit=500&resourceVersion=0" latency="4.747507ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33780" resp=200
W0408 04:13:36.726188  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/apis/apps/v1/statefulsets", Verb:"list", APIPrefix:"apis", APIGroup:"apps", APIVersion:"v1", Namespace:"", Resource:"statefulsets", Subresource:"", Name:"", Parts:[]string{"statefulsets"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:36.726466  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0" latency="4.998793ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33786" resp=200
W0408 04:13:36.726429  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/apis/policy/v1/poddisruptionbudgets", Verb:"list", APIPrefix:"apis", APIGroup:"policy", APIVersion:"v1", Namespace:"", Resource:"poddisruptionbudgets", Subresource:"", Name:"", Parts:[]string{"poddisruptionbudgets"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:36.726662  131322 get.go:260] "Starting watch" path="/api/v1/nodes" resourceVersion="83726" labels="" fields="" timeout="6m40s"
W0408 04:13:36.726692  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/apis/apps/v1/replicasets", Verb:"list", APIPrefix:"apis", APIGroup:"apps", APIVersion:"v1", Namespace:"", Resource:"replicasets", Subresource:"", Name:"", Parts:[]string{"replicasets"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:36.726931  131322 get.go:260] "Starting watch" path="/api/v1/replicationcontrollers" resourceVersion="83726" labels="" fields="" timeout="8m33s"
I0408 04:13:36.726937  131322 get.go:260] "Starting watch" path="/apis/storage.k8s.io/v1/csinodes" resourceVersion="83728" labels="" fields="" timeout="9m25s"
I0408 04:13:36.727078  131322 get.go:260] "Starting watch" path="/api/v1/services" resourceVersion="83726" labels="" fields="" timeout="6m18s"
W0408 04:13:36.727269  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/apis/storage.k8s.io/v1beta1/csistoragecapacities", Verb:"list", APIPrefix:"apis", APIGroup:"storage.k8s.io", APIVersion:"v1beta1", Namespace:"", Resource:"csistoragecapacities", Subresource:"", Name:"", Parts:[]string{"csistoragecapacities"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
W0408 04:13:36.727577  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/persistentvolumes", Verb:"list", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"", Resource:"persistentvolumes", Subresource:"", Name:"", Parts:[]string{"persistentvolumes"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
W0408 04:13:36.727821  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/persistentvolumeclaims", Verb:"list", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"", Resource:"persistentvolumeclaims", Subresource:"", Name:"", Parts:[]string{"persistentvolumeclaims"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:36.727965  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/persistentvolumes?limit=500&resourceVersion=0" latency="6.382202ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33788" resp=200
W0408 04:13:36.727938  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/apis/storage.k8s.io/v1/storageclasses", Verb:"list", APIPrefix:"apis", APIGroup:"storage.k8s.io", APIVersion:"v1", Namespace:"", Resource:"storageclasses", Subresource:"", Name:"", Parts:[]string{"storageclasses"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:36.728210  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0" latency="6.851289ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33776" resp=200
I0408 04:13:36.728241  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0" latency="6.925291ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33774" resp=200
I0408 04:13:36.728353  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0" latency="5.907909ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33796" resp=200
I0408 04:13:36.728392  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0" latency="7.020813ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33782" resp=200
I0408 04:13:36.728600  131322 get.go:260] "Starting watch" path="/api/v1/persistentvolumes" resourceVersion="83726" labels="" fields="" timeout="6m39s"
I0408 04:13:36.728847  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/statefulsets?limit=500&resourceVersion=0" latency="7.363935ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=200
I0408 04:13:36.728901  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1beta1/csistoragecapacities?limit=500&resourceVersion=0" latency="7.419019ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33794" resp=200
I0408 04:13:36.728913  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/replicasets?limit=500&resourceVersion=0" latency="7.404207ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33792" resp=200
I0408 04:13:36.728918  131322 get.go:260] "Starting watch" path="/api/v1/pods" resourceVersion="83726" labels="" fields="status.phase!=Failed,status.phase!=Succeeded" timeout="6m58s"
I0408 04:13:36.728852  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0" latency="7.536454ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=200
I0408 04:13:36.729075  131322 get.go:260] "Starting watch" path="/apis/storage.k8s.io/v1/csidrivers" resourceVersion="83728" labels="" fields="" timeout="5m46s"
I0408 04:13:36.729338  131322 get.go:260] "Starting watch" path="/api/v1/persistentvolumeclaims" resourceVersion="83726" labels="" fields="" timeout="9m5s"
W0408 04:13:36.729454  131322 warnings.go:70] storage.k8s.io/v1beta1 CSIStorageCapacity is deprecated in v1.24+, unavailable in v1.27+
I0408 04:13:36.729570  131322 get.go:260] "Starting watch" path="/apis/apps/v1/replicasets" resourceVersion="83728" labels="" fields="" timeout="5m41s"
I0408 04:13:36.729606  131322 get.go:260] "Starting watch" path="/apis/apps/v1/statefulsets" resourceVersion="83728" labels="" fields="" timeout="5m33s"
I0408 04:13:36.729620  131322 get.go:260] "Starting watch" path="/apis/policy/v1/poddisruptionbudgets" resourceVersion="83727" labels="" fields="" timeout="7m59s"
I0408 04:13:36.730090  131322 get.go:260] "Starting watch" path="/apis/storage.k8s.io/v1beta1/csistoragecapacities" resourceVersion="83728" labels="" fields="" timeout="7m55s"
I0408 04:13:36.730125  131322 get.go:260] "Starting watch" path="/apis/storage.k8s.io/v1/storageclasses" resourceVersion="83728" labels="" fields="" timeout="5m10s"
W0408 04:13:36.730347  131322 warnings.go:70] storage.k8s.io/v1beta1 CSIStorageCapacity is deprecated in v1.24+, unavailable in v1.27+
I0408 04:13:36.736181  131322 shared_informer.go:270] caches populated
I0408 04:13:36.736208  131322 shared_informer.go:270] caches populated
I0408 04:13:36.736361  131322 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="1.445606ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=200
I0408 04:13:36.738111  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/default" latency="934.844µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=404
I0408 04:13:36.741858  131322 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces" latency="3.326406ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=201
I0408 04:13:36.743476  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/default/services/kubernetes" latency="1.126166ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=404
I0408 04:13:36.749331  131322 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/default/services" latency="5.296457ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=201
I0408 04:13:36.752601  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency="2.720213ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=404
I0408 04:13:36.755105  131322 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/default/endpoints" latency="1.97848ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=201
I0408 04:13:36.756605  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/discovery.k8s.io/v1/namespaces/default/endpointslices/kubernetes" latency="988.062µs" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=404
I0408 04:13:36.758816  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/discovery.k8s.io/v1/namespaces/default/endpointslices" latency="1.715836ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=201
I0408 04:13:36.820608  131322 shared_informer.go:270] caches populated
I0408 04:13:36.820647  131322 shared_informer.go:270] caches populated
I0408 04:13:36.820655  131322 shared_informer.go:270] caches populated
I0408 04:13:36.820660  131322 shared_informer.go:270] caches populated
I0408 04:13:36.820665  131322 shared_informer.go:270] caches populated
I0408 04:13:36.820671  131322 shared_informer.go:270] caches populated
I0408 04:13:36.820676  131322 shared_informer.go:270] caches populated
I0408 04:13:36.820681  131322 shared_informer.go:270] caches populated
I0408 04:13:36.820685  131322 shared_informer.go:270] caches populated
I0408 04:13:36.820694  131322 shared_informer.go:270] caches populated
I0408 04:13:36.820700  131322 shared_informer.go:270] caches populated
I0408 04:13:36.820705  131322 shared_informer.go:270] caches populated
I0408 04:13:36.820710  131322 shared_informer.go:270] caches populated
W0408 04:13:36.821294  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/nodes", Verb:"create", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"", Resource:"nodes", Subresource:"", Name:"", Parts:[]string{"nodes"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:36.825224  131322 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/nodes" latency="4.02593ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=201
I0408 04:13:36.825368  131322 node_tree.go:65] Added node "testnode-0" in group "" to NodeTree
I0408 04:13:36.825408  131322 eventhandlers.go:101] "Add event for node" node="testnode-0"
W0408 04:13:36.825976  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/nodes", Verb:"create", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"", Resource:"nodes", Subresource:"", Name:"", Parts:[]string{"nodes"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:36.828712  131322 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/nodes" latency="2.82111ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=201
I0408 04:13:36.828792  131322 node_tree.go:65] Added node "testnode-1" in group "" to NodeTree
I0408 04:13:36.828819  131322 eventhandlers.go:101] "Add event for node" node="testnode-1"
W0408 04:13:36.829395  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/nodes", Verb:"create", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"", Resource:"nodes", Subresource:"", Name:"", Parts:[]string{"nodes"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:36.834327  131322 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/nodes" latency="5.052063ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=201
I0408 04:13:36.834478  131322 node_tree.go:65] Added node "testnode-2" in group "" to NodeTree
I0408 04:13:36.834516  131322 eventhandlers.go:101] "Add event for node" node="testnode-2"
W0408 04:13:36.834952  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/nodes", Verb:"create", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"", Resource:"nodes", Subresource:"", Name:"", Parts:[]string{"nodes"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:36.837970  131322 node_tree.go:65] Added node "testnode-3" in group "" to NodeTree
I0408 04:13:36.838005  131322 eventhandlers.go:101] "Add event for node" node="testnode-3"
I0408 04:13:36.838354  131322 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/nodes" latency="3.477421ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=201
W0408 04:13:36.939208  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/nodes", Verb:"create", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"", Resource:"nodes", Subresource:"", Name:"", Parts:[]string{"nodes"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:36.943114  131322 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/nodes" latency="4.093501ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=201
W0408 04:13:36.943758  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145/pods", Verb:"create", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145", Resource:"pods", Subresource:"", Name:"", Parts:[]string{"pods"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:36.946030  131322 eventhandlers.go:164] "Add event for unscheduled pod" pod="nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145/pod-with-node-affinity"
I0408 04:13:36.946151  131322 scheduling_queue.go:849] "About to try and schedule pod" pod="nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145/pod-with-node-affinity"
I0408 04:13:36.946171  131322 scheduler.go:459] "Attempting to schedule pod" pod="nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145/pod-with-node-affinity"
I0408 04:13:36.946423  131322 default_binder.go:51] "Attempting to bind pod to node" pod="nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145/pod-with-node-affinity" node="testnode-2"
I0408 04:13:36.947162  131322 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145/pods" latency="3.457009ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=201
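The POST above creates the pod-with-node-affinity fixture in the test namespace. As a rough illustration only (the fixture's actual label keys, values, and weights are not visible in this log), a pod carrying a preferred node-affinity term of the kind this test exercises could be built like this, with affinity-key/affinity-value standing in as hypothetical placeholders:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithNodeAffinity returns a pod that prefers (but does not require)
// nodes carrying a hypothetical label; the scheduler's scoring should
// favor the labeled node when all nodes are otherwise feasible.
func podWithNodeAffinity(ns string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-node-affinity", Namespace: ns},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
			Affinity: &v1.Affinity{
				NodeAffinity: &v1.NodeAffinity{
					PreferredDuringSchedulingIgnoredDuringExecution: []v1.PreferredSchedulingTerm{{
						Weight: 100, // maximum weight, so the matching node should score highest
						Preference: v1.NodeSelectorTerm{
							MatchExpressions: []v1.NodeSelectorRequirement{{
								Key:      "affinity-key", // hypothetical label key
								Operator: v1.NodeSelectorOpIn,
								Values:   []string{"affinity-value"}, // hypothetical value set on the favored node
							}},
						},
					}},
				},
			},
		},
	}
}

func main() {
	pod := podWithNodeAffinity("example-ns")
	fmt.Println(pod.Spec.Affinity.NodeAffinity.PreferredDuringSchedulingIgnoredDuringExecution[0].Weight)
}

Preferred terms only affect scoring, not feasibility, which is consistent with every evaluated node being reported feasible in the binding line later in this log.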
W0408 04:13:36.947713  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145/pods/pod-with-node-affinity/binding", Verb:"create", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145", Resource:"pods", Subresource:"binding", Name:"pod-with-node-affinity", Parts:[]string{"pods", "pod-with-node-affinity", "binding"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:36.949294  131322 node_tree.go:65] Added node "testnode-4" in group "" to NodeTree
I0408 04:13:36.949331  131322 eventhandlers.go:101] "Add event for node" node="testnode-4"
I0408 04:13:36.949775  131322 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145/pods/pod-with-node-affinity/binding" latency="2.155452ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=201
I0408 04:13:36.949982  131322 eventhandlers.go:201] "Delete event for unscheduled pod" pod="nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145/pod-with-node-affinity"
I0408 04:13:36.950030  131322 scheduler.go:604] "Successfully bound pod to node" pod="nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145/pod-with-node-affinity" node="testnode-2" evaluatedNodes=4 feasibleNodes=4
I0408 04:13:36.950048  131322 eventhandlers.go:221] "Add event for scheduled pod" pod="nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145/pod-with-node-affinity"
W0408 04:13:36.950481  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/apis/events.k8s.io/v1/namespaces/nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145/events", Verb:"create", APIPrefix:"apis", APIGroup:"events.k8s.io", APIVersion:"v1", Namespace:"nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145", Resource:"events", Subresource:"", Name:"", Parts:[]string{"events"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:36.954012  131322 httplog.go:89] "HTTP" verb="POST" URI="/apis/events.k8s.io/v1/namespaces/nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145/events" latency="3.61763ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=201
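The POST above targets /apis/events.k8s.io/v1/... — the scheduler records its event through the events.k8s.io/v1 API rather than the core v1 events endpoint. A minimal, self-contained sketch (not the scheduler's own wiring) of how a client-go events/v1 recorder that produces such requests can be set up; the component name and event fields below are illustrative only:

package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/events"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Sink backed by the events.k8s.io/v1 client; recorded events end up as
	// POSTs to /apis/events.k8s.io/v1/namespaces/<ns>/events, like the request above.
	broadcaster := events.NewBroadcaster(&events.EventSinkImpl{Interface: client.EventsV1()})
	stopCh := make(chan struct{})
	broadcaster.StartRecordingToSink(stopCh)
	defer close(stopCh)

	recorder := broadcaster.NewRecorder(scheme.Scheme, "example-component") // component name is illustrative

	// Placeholder object for the event's "regarding" reference; a real caller
	// passes the actual pod it just acted on.
	pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "example-pod", Namespace: "default", UID: "1234"}}
	recorder.Eventf(pod, nil, v1.EventTypeNormal, "Scheduled", "Binding", "illustrative note")
}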
W0408 04:13:37.047956  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145/pods/pod-with-node-affinity", Verb:"get", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145", Resource:"pods", Subresource:"", Name:"pod-with-node-affinity", Parts:[]string{"pods", "pod-with-node-affinity"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:37.050314  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145/pods/pod-with-node-affinity" latency="2.470761ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=200
W0408 04:13:37.050957  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145/pods/pod-with-node-affinity", Verb:"get", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145", Resource:"pods", Subresource:"", Name:"pod-with-node-affinity", Parts:[]string{"pods", "pod-with-node-affinity"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:37.052622  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145/pods/pod-with-node-affinity" latency="1.745578ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=200
    priorities_test.go:113: Pod pod-with-node-affinity got scheduled on an unexpected node: testnode-2. Expected node: testnode-4.
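The line above is the actual failure: the pod landed on testnode-2 while the test expected testnode-4. The bind attempt earlier in the log reports evaluatedNodes=4 feasibleNodes=4 even though five nodes are created, and the "Add event for node" line for testnode-4 only appears after the bind, which suggests the pod was scheduled before testnode-4 reached the scheduler's cache. As a hedged reconstruction (not the real priorities_test.go code), the check that produces this message boils down to fetching the scheduled pod and comparing spec.nodeName against the expected node:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Names taken from the log above; the expected node is the one the
	// preferred affinity term is supposed to favor during scoring.
	const (
		ns           = "nodeaffinity047751fe-828f-40fd-a494-3e4c6d83d145"
		podName      = "pod-with-node-affinity"
		expectedNode = "testnode-4"
	)

	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), podName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pod.Spec.NodeName != expectedNode {
		fmt.Printf("Pod %s got scheduled on an unexpected node: %s. Expected node: %s.\n",
			podName, pod.Spec.NodeName, expectedNode)
	}
}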
I0408 04:13:37.053219  131322 reflector.go:225] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:37.053218  131322 reflector.go:225] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:37.053212  131322 reflector.go:225] Stopping reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:37.053253  131322 reflector.go:225] Stopping reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:37.053339  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=83728&timeout=5m10s&timeoutSeconds=310&watch=true" latency="323.408866ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33800" resp=0
I0408 04:13:37.053352  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=83726&timeout=6m39s&timeoutSeconds=399&watch=true" latency="324.86409ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33788" resp=0
I0408 04:13:37.053376  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/services?allowWatchBookmarks=true&resourceVersion=83726&timeout=6m18s&timeoutSeconds=378&watch=true" latency="326.758844ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33780" resp=0
I0408 04:13:37.053391  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=83728&timeout=5m33s&timeoutSeconds=333&watch=true" latency="323.945803ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33414" resp=0
I0408 04:13:37.053435  131322 reflector.go:225] Stopping reflector *v1.CSIDriver (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:37.053462  131322 reflector.go:225] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:37.053477  131322 reflector.go:225] Stopping reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:37.053492  131322 reflector.go:225] Stopping reflector *v1beta1.CSIStorageCapacity (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:37.053500  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1/csidrivers?allowWatchBookmarks=true&resourceVersion=83728&timeout=5m46s&timeoutSeconds=346&watch=true" latency="324.753154ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33774" resp=0
I0408 04:13:37.053503  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=83726&timeout=9m5s&timeoutSeconds=545&watch=true" latency="324.396046ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33798" resp=0
I0408 04:13:37.053494  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1beta1/csistoragecapacities?allowWatchBookmarks=true&resourceVersion=83728&timeout=7m55s&timeoutSeconds=475&watch=true" latency="323.64019ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33802" resp=0
I0408 04:13:37.053511  131322 reflector.go:225] Stopping reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:37.053541  131322 reflector.go:225] Stopping reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:37.053600  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/policy/v1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=83727&timeout=7m59s&timeoutSeconds=479&watch=true" latency="324.340696ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33792" resp=0
I0408 04:13:37.053605  131322 reflector.go:225] Stopping reflector *v1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:37.053619  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&resourceVersion=83726&timeout=6m58s&timeoutSeconds=418&watch=true" latency="324.881225ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33782" resp=0
W0408 04:13:37.053462  131322 apf_controller.go:787] no match found for request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/nodes", Verb:"deletecollection", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"", Resource:"nodes", Subresource:"", Name:"", Parts:[]string{"nodes"}} and user &user.DefaultInfo{Name:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}; selecting catchAll={"metadata":{"name":"catch-all","uid":"ee628807-c271-4be7-b63c-735a33e85318","resourceVersion":"83765","generation":1,"creationTimestamp":"2021-04-08T04:13:32Z"},"spec":{"priorityLevelConfiguration":{"name":"catch-all"},"matchingPrecedence":10000,"distinguisherMethod":{"type":"ByUser"},"rules":[{"subjects":[{"kind":"Group","group":{"name":"system:unauthenticated"}},{"kind":"Group","group":{"name":"system:authenticated"}}],"resourceRules":[{"verbs":["*"],"apiGroups":["*"],"resources":["*"],"clusterScope":true,"namespaces":["*"]}],"nonResourceRules":[{"verbs":["*"],"nonResourceURLs":["*"]}]}]},"status":{"conditions":[{"type":"Dangling","status":"False","lastTransitionTime":"2021-04-08T04:13:32Z","reason":"Found","message":"This FlowSchema references the PriorityLevelConfiguration object named \"catch-all\" and it exists"}]}} as fallback flow schema
I0408 04:13:37.053666  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=83726&timeout=8m33s&timeoutSeconds=513&watch=true" latency="326.94134ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33786" resp=0
I0408 04:13:37.053626  131322 reflector.go:225] Stopping reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:37.053724  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=83728&timeout=9m25s&timeoutSeconds=565&watch=true" latency="326.86564ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33784" resp=0
I0408 04:13:37.053730  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=83726&timeout=6m40s&timeoutSeconds=400&watch=true" latency="327.337936ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33778" resp=0
I0408 04:13:37.053733  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=83728&timeout=5m41s&timeoutSeconds=341&watch=true" latency="324.398133ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33540" resp=0
I0408 04:13:37.053811  131322 reflector.go:225] Stopping reflector *v1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:37.066799  131322 httplog.go:89] "HTTP" verb="DELETE" URI="/api/v1/nodes" latency="13.435979ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=200
I0408 04:13:37.067027  131322 controller.go:181] Shutting down kubernetes service endpoint reconciler
I0408 04:13:37.068382  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency="1.16618ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=200
I0408 04:13:37.070995  131322 httplog.go:89] "HTTP" verb="PUT" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency="2.001779ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=200
I0408 04:13:37.072979  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/discovery.k8s.io/v1/namespaces/default/endpointslices/kubernetes" latency="1.345545ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=200
I0408 04:13:37.075564  131322 httplog.go:89] "HTTP" verb="PUT" URI="/apis/discovery.k8s.io/v1/namespaces/default/endpointslices/kubernetes" latency="1.974174ms" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33804" resp=200
I0408 04:13:37.076016  131322 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0408 04:13:37.076052  131322 apf_controller.go:303] Shutting down API Priority and Fairness config worker
I0408 04:13:37.076119  131322 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&resourceVersion=83726&timeout=9m49s&timeoutSeconds=589&watch=true" latency="4.658654756s" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33380" resp=0
I0408 04:13:37.076133  131322 reflector.go:225] Stopping reflector *v1beta1.FlowSchema (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:37.076133  131322 reflector.go:225] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/controlplane/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0408 04:13:37.076175  131322 reflector.go:225] Stopping reflector *v1beta1.PriorityLevelConfiguration (0s) from k8s.io/client-go/informers/factory.go:134
I0408 04:13:37.076179  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations?allowWatchBookmarks=true&resourceVersion=83728&timeout=9m33s&timeoutSeconds=573&watch=true" latency="4.658797477s" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33378" resp=0
I0408 04:13:37.076232  131322 httplog.go:89] "HTTP" verb="GET" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas?allowWatchBookmarks=true&resourceVersion=83728&timeout=5m14s&timeoutSeconds=314&watch=true" latency="4.659323049s" userAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:33384" resp=0
--- FAIL: TestNodeAffinity (4.83s)

				from junit_20210408-035822.xml
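
For context on the failure above: TestNodeAffinity creates pods with required node affinity and asserts where the scheduler places them, and here the pod landed on testnode-2 rather than the expected testnode-4. A minimal sketch of the v1 NodeAffinity API involved (illustrative only -- the helper name, label key, and image below are assumptions, not the actual fixture in priorities_test.go):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithRequiredNodeAffinity builds a pod that the scheduler may only place on
// nodes whose label labelKey has the value labelValue; all other nodes are
// filtered out before scoring.
func podWithRequiredNodeAffinity(name, labelKey, labelValue string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
			Affinity: &v1.Affinity{
				NodeAffinity: &v1.NodeAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
						NodeSelectorTerms: []v1.NodeSelectorTerm{{
							MatchExpressions: []v1.NodeSelectorRequirement{{
								Key:      labelKey,
								Operator: v1.NodeSelectorOpIn,
								Values:   []string{labelValue},
							}},
						}},
					},
				},
			},
		},
	}
}

func main() {
	pod := podWithRequiredNodeAffinity("pod-with-node-affinity", "zone", "zone-4")
	fmt.Printf("%s requires nodes labeled zone=zone-4\n", pod.Name)
}

Because the affinity term is required (not preferred), nodes that do not satisfy it should never be chosen, which is why a placement on a different node fails the assertion above.
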



3313 Passed Tests (not shown here)

27 Skipped Tests (not shown here)

Error lines from build-log.txt

... skipping 70 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 156: bogus-expected-to-fail: command not found
!!! [0408 03:45:34] Call tree:
!!! [0408 03:45:34]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0408 03:45:34]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0408 03:45:34]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:132 juLog(...)
!!! [0408 03:45:34]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:160 record_command(...)
!!! [0408 03:45:34]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [0408 03:45:34] Running kubeadm tests
+++ [0408 03:45:39] Building go targets for linux/amd64:
    cmd/kubeadm
+++ [0408 03:46:33] Running tests without code coverage
{"Time":"2021-04-08T03:48:14.386412276Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t49.394s\n"}
✓  cmd/kubeadm/test/cmd (49.397s)
... skipping 352 lines ...
I0408 03:50:56.172577   60000 client.go:360] parsed scheme: "passthrough"
I0408 03:50:56.172637   60000 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0408 03:50:56.172648   60000 clientconn.go:948] ClientConn switching balancer to "pick_first"
+++ [0408 03:51:03] Generate kubeconfig for controller-manager
+++ [0408 03:51:03] Starting controller-manager
I0408 03:51:04.635780   63761 serving.go:347] Generated self-signed cert in-memory
W0408 03:51:05.120133   63761 authentication.go:410] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0408 03:51:05.120203   63761 authentication.go:307] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0408 03:51:05.120211   63761 authentication.go:331] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0408 03:51:05.120228   63761 authorization.go:216] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0408 03:51:05.120250   63761 authorization.go:184] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0408 03:51:05.120310   63761 controllermanager.go:175] Version: v1.22.0-alpha.0.36+6c79f498206ade
I0408 03:51:05.121807   63761 secure_serving.go:197] Serving securely on [::]:10257
I0408 03:51:05.121892   63761 tlsconfig.go:240] Starting DynamicServingCertificateController
I0408 03:51:05.122517   63761 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0408 03:51:05.123018   63761 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager...
... skipping 97 lines ...
I0408 03:51:05.737143   63761 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0408 03:51:05.737178   63761 controllermanager.go:574] Started "resourcequota"
I0408 03:51:05.737215   63761 resource_quota_controller.go:273] Starting resource quota controller
I0408 03:51:05.737237   63761 shared_informer.go:240] Waiting for caches to sync for resource quota
I0408 03:51:05.737278   63761 resource_quota_monitor.go:304] QuotaMonitor running
I0408 03:51:05.737474   63761 node_lifecycle_controller.go:76] Sending events to api server
E0408 03:51:05.737535   63761 core.go:231] failed to start cloud node lifecycle controller: no cloud provider provided
W0408 03:51:05.737546   63761 controllermanager.go:566] Skipping "cloud-node-lifecycle"
I0408 03:51:05.738094   63761 controllermanager.go:574] Started "endpointslicemirroring"
W0408 03:51:05.738196   63761 controllermanager.go:566] Skipping "csrsigning"
W0408 03:51:05.738214   63761 controllermanager.go:553] "tokencleaner" is disabled
I0408 03:51:05.738164   63761 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
I0408 03:51:05.738599   63761 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
... skipping 29 lines ...
I0408 03:51:05.756827   63761 node_lifecycle_controller.go:377] Sending events to api server.
I0408 03:51:05.757120   63761 taint_manager.go:163] "Sending events to api server"
I0408 03:51:05.757212   63761 node_lifecycle_controller.go:505] Controller will reconcile labels.
I0408 03:51:05.757232   63761 controllermanager.go:574] Started "nodelifecycle"
I0408 03:51:05.757428   63761 node_lifecycle_controller.go:539] Starting node controller
I0408 03:51:05.757459   63761 shared_informer.go:240] Waiting for caches to sync for taint
E0408 03:51:05.757648   63761 core.go:91] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0408 03:51:05.757665   63761 controllermanager.go:566] Skipping "service"
I0408 03:51:05.759191   63761 controllermanager.go:574] Started "persistentvolume-expander"
I0408 03:51:05.759620   63761 expand_controller.go:324] Starting expand controller
I0408 03:51:05.759640   63761 shared_informer.go:240] Waiting for caches to sync for expand
I0408 03:51:05.759695   63761 controllermanager.go:574] Started "cronjob"
I0408 03:51:05.760404   63761 cronjob_controllerv2.go:125] Starting cronjob controller v2
... skipping 16 lines ...
I0408 03:51:05.763685   63761 replica_set.go:182] Starting replicationcontroller controller
I0408 03:51:05.763699   63761 shared_informer.go:240] Waiting for caches to sync for ReplicationController
I0408 03:51:05.764467   63761 controllermanager.go:574] Started "horizontalpodautoscaling"
I0408 03:51:05.765166   63761 shared_informer.go:240] Waiting for caches to sync for resource quota
I0408 03:51:05.765268   63761 horizontal.go:169] Starting HPA controller
I0408 03:51:05.765279   63761 shared_informer.go:240] Waiting for caches to sync for HPA
W0408 03:51:05.786934   63761 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0408 03:51:05.812787   63761 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0408 03:51:05.813310   63761 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0408 03:51:05.813594   63761 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0408 03:51:05.813833   63761 shared_informer.go:247] Caches are synced for PV protection 
W0408 03:51:05.814069   63761 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0408 03:51:05.814133   63761 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
... skipping 37 lines ...
I0408 03:51:06.156602   63761 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
I0408 03:51:06.237797   63761 shared_informer.go:247] Caches are synced for resource quota 
I0408 03:51:06.257179   63761 shared_informer.go:247] Caches are synced for disruption 
I0408 03:51:06.257216   63761 disruption.go:371] Sending events to api server.
I0408 03:51:06.264044   63761 shared_informer.go:247] Caches are synced for ReplicationController 
I0408 03:51:06.265378   63761 shared_informer.go:247] Caches are synced for resource quota 
The Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.0.0.1"}: failed to allocated ip:10.0.0.1 with error:provided IP is already allocated
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   38s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests

+++ Running case: test-cmd.run_kubectl_version_tests 
... skipping 100 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0408 03:51:11] Creating namespace namespace-1617853871-19890
namespace/namespace-1617853871-19890 created
Context "test" modified.
+++ [0408 03:51:11] Testing RESTMapper
+++ [0408 03:51:12] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
... skipping 63 lines ...
namespace/namespace-1617853878-2812 created
Context "test" modified.
+++ [0408 03:51:18] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
(Brbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
(BSuccessful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
(BSuccessful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
(Bclusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
(Brbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
(Bclusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
(BSuccessful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
(Bclusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 64 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
(Brbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
(Brbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
(Brolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
I0408 03:51:26.517490   60000 client.go:360] parsed scheme: "passthrough"
I0408 03:51:26.517579   60000 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0408 03:51:26.517595   60000 clientconn.go:948] ClientConn switching balancer to "pick_first"
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
... skipping 32 lines ...
message:Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
No resources found in namespace-1617853888-11146 namespace.
has:Role is deprecated
Successful
message:Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
No resources found in namespace-1617853888-11146 namespace.
Error: 1 warning received
has:Role is deprecated
Successful
message:Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
No resources found in namespace-1617853888-11146 namespace.
Error: 1 warning received
has:Error: 1 warning received
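
The "Warning: ... Role is deprecated ..." lines above are API deprecation warnings returned by the server in Warning response headers; kubectl prints them, and the "Error: 1 warning received" case is consistent with running kubectl with --warnings-as-errors. As a rough illustration of how a Go client can surface the same headers (a sketch using client-go's warning-handler hooks; the writer configuration here is an assumption, not what kubectl does internally):

package main

import (
	"os"

	"k8s.io/client-go/rest"
)

func main() {
	// Route any Warning headers returned by the API server to stderr,
	// de-duplicating repeated warnings for the life of the process.
	rest.SetDefaultWarningHandler(
		rest.NewWarningWriter(os.Stderr, rest.WarningWriterOptions{Deduplicate: true}),
	)
	// ... build a rest.Config / clientset as usual; subsequent requests that
	// hit deprecated APIs (e.g. rbac.authorization.k8s.io/v1beta1 Role) will
	// print the same "... is deprecated ..." text seen in the log above.
}
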
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:163: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
(Brbac.sh:164: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
(Brbac.sh:165: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
(BSuccessful
... skipping 412 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Berror: resource(s) were provided, but no name was specified
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bcore.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Berror: setting 'all' parameter but found a non empty selector. 
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bcore.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:210: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
(Bcore.sh:215: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 24 lines ...
(BWarning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/test-pdb-3 created
core.sh:265: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
(BWarning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/test-pdb-4 created
core.sh:269: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
(Berror: min-available and max-unavailable cannot be both specified
core.sh:275: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 221 lines ...
core.sh:534: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.4.1:
(BSuccessful
message:kubectl-create kubectl-patch
has:kubectl-patch
pod/valid-pod patched
core.sh:554: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
(B+++ [0408 03:52:03] "kubectl patch with resourceVersion 600" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:578: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
(BSuccessful
message:kubectl-replace
has:kubectl-replace
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W0408 03:52:04.320584   63761 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
core.sh:606: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
(Bnode/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:631: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
(Bnode/node-v1-test replaced
core.sh:647: Successful get node node-v1-test {{.metadata.annotations.a}}: b
... skipping 29 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:3.4.1
    name: kubernetes-pause
has:localonlyvalue
core.sh:683: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Berror: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:687: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Bcore.sh:691: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Bpod/valid-pod labeled
core.sh:695: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
(Bcore.sh:699: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 86 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0408 03:52:16] Creating namespace namespace-1617853936-9407
namespace/namespace-1617853936-9407 created
Context "test" modified.
+++ [0408 03:52:16] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 44 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [0408 03:52:16] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

+++ Running case: test-cmd.run_kubectl_apply_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 29 lines ...
I0408 03:52:20.548494   63761 event.go:291] "Event occurred" object="namespace-1617853937-10147/test-deployment-retainkeys-8695b756f8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-8695b756f8-4hwvn"
deployment.apps "test-deployment-retainkeys" deleted
apply.sh:88: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/selector-test-pod created
apply.sh:92: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
(BSuccessful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
apply.sh:101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BW0408 03:52:21.792236   72177 helpers.go:571] --dry-run=true is deprecated (boolean value) and can be replaced with --dry-run=client.
pod/test-pod created (dry run)
pod/test-pod created (dry run)
... skipping 34 lines ...
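
The deprecation notice and the "(dry run)" / "(server dry run)" markers in this output reflect the two dry-run modes: --dry-run=client validates and prints locally without contacting the server, while --dry-run=server submits the object so defaulting, validation, and admission all run but nothing is persisted. A minimal sketch of the server-side variant through client-go (package and function names are illustrative):

package dryrunexample

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createServerDryRun submits the pod with DryRun=["All"]: the API server runs
// the full create path but does not store the object -- the programmatic
// counterpart of `kubectl create --dry-run=server`.
func createServerDryRun(ctx context.Context, cs kubernetes.Interface, pod *corev1.Pod) (*corev1.Pod, error) {
	return cs.CoreV1().Pods(pod.Namespace).Create(ctx, pod,
		metav1.CreateOptions{DryRun: []string{metav1.DryRunAll}})
}
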
(Bpod/b created
apply.sh:196: Successful get pods a {{.metadata.name}}: a
(Bapply.sh:197: Successful get pods b -n nsb {{.metadata.name}}: b
(Bpod "a" deleted
pod "b" deleted
Successful
message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
has:all resources selected for prune without explicitly passing --all
pod/a created
pod/b created
service/prune-svc created
I0408 03:52:30.822679   63761 horizontal.go:361] Horizontal Pod Autoscaler frontend has been deleted in namespace-1617853933-24671
Warning: batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
... skipping 39 lines ...
(Bpod/b unchanged
pod/a pruned
Warning: batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
apply.sh:254: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: b:
(Bnamespace "nsb" deleted
Successful
message:error: the namespace from the provided object "nsb" does not match the namespace "foo". You must pass '--namespace=nsb' to perform this operation.
has:the namespace from the provided object "nsb" does not match the namespace "foo".
apply.sh:265: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
(Bservice/a created
apply.sh:269: Successful get services a {{.metadata.name}}: a
(BSuccessful
message:The Service "a" is invalid: spec.clusterIPs[0]: Invalid value: []string{"10.0.0.12"}: may not change once set
... skipping 26 lines ...
(Bapply.sh:291: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
(Bapply.sh:292: Successful get service test-the-service {{.metadata.name}}: test-the-service
(Bconfigmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
Successful
message:Error from server (NotFound): namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
apply.sh:300: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
message:namespace/multi-resource-ns created
Error from server (NotFound): error when creating "hack/testdata/multi-resource-1.yaml": namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
Successful
message:Error from server (NotFound): pods "test-pod" not found
has:pods "test-pod" not found
pod/test-pod created
namespace/multi-resource-ns unchanged
apply.sh:308: Successful get pods test-pod -n multi-resource-ns {{.metadata.name}}: test-pod
(Bpod "test-pod" deleted
namespace "multi-resource-ns" deleted
apply.sh:314: Successful get configmaps --field-selector=metadata.name=foo {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
message:configmap/foo created
error: unable to recognize "hack/testdata/multi-resource-2.yaml": no matches for kind "Bogus" in version "example.com/v1"
has:no matches for kind "Bogus" in version "example.com/v1"
apply.sh:320: Successful get configmaps foo {{.metadata.name}}: foo
(Bconfigmap "foo" deleted
apply.sh:326: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
message:pod/pod-a created
... skipping 6 lines ...
pod "pod-c" deleted
apply.sh:334: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bapply.sh:338: Successful get crds {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
message:Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/widgets.example.com created
error: unable to recognize "hack/testdata/multi-resource-4.yaml": no matches for kind "Widget" in version "example.com/v1"
has:no matches for kind "Widget" in version "example.com/v1"
I0408 03:53:12.065741   60000 client.go:360] parsed scheme: "endpoint"
I0408 03:53:12.065794   60000 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
Successful
message:Error from server (NotFound): widgets.example.com "foo" not found
has:widgets.example.com "foo" not found
apply.sh:344: Successful get crds widgets.example.com {{.metadata.name}}: widgets.example.com
(BI0408 03:53:14.379554   63761 namespace_controller.go:185] Namespace has been deleted multi-resource-ns
I0408 03:53:14.425338   60000 controller.go:611] quota admission added evaluator for: widgets.example.com
widget.example.com/foo created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
... skipping 37 lines ...
I0408 03:53:17.342331   60000 client.go:360] parsed scheme: "passthrough"
I0408 03:53:17.342393   60000 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0408 03:53:17.342410   60000 clientconn.go:948] ClientConn switching balancer to "pick_first"
apply.sh:403: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(B+++ [0408 03:53:17] Testing upgrade kubectl client-side apply to server-side apply
pod/test-pod created
error: Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using v1: .metadata.labels.name
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
... skipping 79 lines ...
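
The conflict message above comes from server-side apply's field management: another field manager ("kubectl-client-side-apply") owns .metadata.labels.name, so the apply is rejected until the caller either takes ownership of the field or stops managing it. The --force-conflicts route mentioned in the message corresponds to the force option on an apply patch; a rough client-go sketch (the field-manager name and the choice of a pod resource are assumptions for illustration):

package applyexample

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// forceApply sends the manifest as a server-side apply patch and, with Force
// set, takes ownership of any fields currently owned by other field managers --
// roughly what `kubectl apply --server-side --force-conflicts` does.
func forceApply(ctx context.Context, cs kubernetes.Interface, ns, name string, manifest []byte) (*corev1.Pod, error) {
	force := true
	return cs.CoreV1().Pods(ns).Patch(ctx, name, types.ApplyPatchType, manifest,
		metav1.PatchOptions{FieldManager: "my-manager", Force: &force})
}

Without Force, the same call returns a conflict naming the owning manager, exactly as shown in the message above.
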
(Bpod "nginx-extensions" deleted
Successful
message:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 3 lines ...
Context "test" modified.
+++ [0408 03:53:21] Testing kubectl create filter
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
(BSuccessful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 29 lines ...
I0408 03:53:25.103989   63761 event.go:291] "Event occurred" object="namespace-1617854002-29020/nginx" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-9bb9c4878 to 3"
I0408 03:53:25.108525   63761 event.go:291] "Event occurred" object="namespace-1617854002-29020/nginx-9bb9c4878" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-9bb9c4878-zvzk6"
I0408 03:53:25.114689   63761 event.go:291] "Event occurred" object="namespace-1617854002-29020/nginx-9bb9c4878" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-9bb9c4878-dz2sq"
I0408 03:53:25.116395   63761 event.go:291] "Event occurred" object="namespace-1617854002-29020/nginx-9bb9c4878" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-9bb9c4878-7pf4d"
apps.sh:152: Successful get deployment nginx {{.metadata.name}}: nginx
(BSuccessful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1617854002-29020\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1617854002-29020"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
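
"Error from server (Conflict)" here is ordinary optimistic concurrency: the patch carried a stale resourceVersion ("99") while the deployment had already moved on, so the server refused the write. The usual programmatic remedy is to re-read and retry, for example with client-go's RetryOnConflict helper (the label mutation below is purely illustrative):

package conflictexample

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// relabelDeployment retries the read-modify-write whenever the server answers
// 409 Conflict, so a concurrent update simply triggers another attempt against
// the latest resourceVersion.
func relabelDeployment(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if d.Labels == nil {
			d.Labels = map[string]string{}
		}
		d.Labels["name"] = "nginx2"
		_, err = cs.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{})
		return err
	})
}
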
deployment.apps/nginx configured
I0408 03:53:33.819037   63761 event.go:291] "Event occurred" object="namespace-1617854002-29020/nginx" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-6dd6cfdb57 to 3"
I0408 03:53:33.823867   63761 event.go:291] "Event occurred" object="namespace-1617854002-29020/nginx-6dd6cfdb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6dd6cfdb57-xgrld"
I0408 03:53:33.830803   63761 event.go:291] "Event occurred" object="namespace-1617854002-29020/nginx-6dd6cfdb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6dd6cfdb57-qpdds"
I0408 03:53:33.832488   63761 event.go:291] "Event occurred" object="namespace-1617854002-29020/nginx-6dd6cfdb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6dd6cfdb57-clmhl"
Successful
... skipping 300 lines ...
+++ [0408 03:53:42] Creating namespace namespace-1617854022-14124
namespace/namespace-1617854022-14124 created
Context "test" modified.
+++ [0408 03:53:42] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 23 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1617854022-14124 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1617854022-14124 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I0408 03:53:44.871885   75659 loader.go:372] Config loaded from file:  /tmp/tmp.gSbHBgwJKa/.kube/config
I0408 03:53:44.878293   75659 round_trippers.go:454] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds
I0408 03:53:44.907613   75659 round_trippers.go:454] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
I0408 03:53:44.910064   75659 round_trippers.go:454] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 2 milliseconds
... skipping 594 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(B<no value>Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2021-04-08T03:53:52Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fieldsType":"FieldsV1", "fieldsV1":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl-create", "operation":"Update", "time":"2021-04-08T03:53:52Z"}}, "name":"valid-pod", "namespace":"namespace-1617854032-16314", "resourceVersion":"1053", "uid":"6d2035ae-9c29-4603-ab92-eee4bfd927b1"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "preemptionPolicy":"PreemptLowerPriority", "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
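
Both failures in this block are missing-key lookups: jsonpath's {.missing} above and go-template's {{.missing}} in the next case fail because the pod object has no "missing" field. The go-template wording ("map has no entry for key") matches text/template's missingkey=error mode, which kubectl appears to toggle via --allow-missing-template-keys; a standalone sketch of that behaviour (not kubectl's own printer code):

package main

import (
	"fmt"
	"os"
	"text/template"
)

func main() {
	data := map[string]interface{}{"kind": "Pod"}

	// Default behaviour: a missing map key renders as "<no value>".
	lenient := template.Must(template.New("output").Parse("{{.missing}}\n"))
	_ = lenient.Execute(os.Stdout, data)

	// With missingkey=error, execution aborts with the same
	// `map has no entry for key "missing"` wording seen in the log.
	strict := template.Must(template.New("output").Option("missingkey=error").Parse("{{.missing}}\n"))
	if err := strict.Execute(os.Stdout, data); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
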
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2021-04-08T03:53:52Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl-create","operation":"Update","time":"2021-04-08T03:53:52Z"}],"name":"valid-pod","namespace":"namespace-1617854032-16314","resourceVersion":"1053","uid":"6d2035ae-9c29-4603-ab92-eee4bfd927b1"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority","priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2021-04-08T03:53:52Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl-create operation:Update time:2021-04-08T03:53:52Z]] name:valid-pod namespace:namespace-1617854032-16314 resourceVersion:1053 uid:6d2035ae-9c29-4603-ab92-eee4bfd927b1] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true preemptionPolicy:PreemptLowerPriority priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
... skipping 84 lines ...
  terminationGracePeriodSeconds: 30
status:
  phase: Pending
  qosClass: Guaranteed
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/redis-master created
pod/valid-pod created
Successful
... skipping 39 lines ...
+++ [0408 03:54:01] Creating namespace namespace-1617854041-13259
namespace/namespace-1617854041-13259 created
Context "test" modified.
+++ [0408 03:54:02] Testing kubectl exec POD COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 3 lines ...
+++ [0408 03:54:02] Creating namespace namespace-1617854042-8059
namespace/namespace-1617854042-8059 created
Context "test" modified.
+++ [0408 03:54:02] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: the server doesn't have a resource type "foo"
has:error:
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0408 03:54:03.684315   63761 event.go:291] "Event occurred" object="namespace-1617854042-8059/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-m48tr"
I0408 03:54:03.689223   63761 event.go:291] "Event occurred" object="namespace-1617854042-8059/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-mssv7"
I0408 03:54:03.689273   63761 event.go:291] "Event occurred" object="namespace-1617854042-8059/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-7rj8v"
configmap/test-set-env-config created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod, type/name or --filename must be specified
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-7rj8v does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-7rj8v does not have a host assigned
has not:pod, type/name or --filename must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"1b593f8a-ed8d-4eed-8912-b1513662e5b0","resourceVersion":"1135","creationTimestamp":"2021-04-08T03:54:05Z"}}
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"1b593f8a-ed8d-4eed-8912-b1513662e5b0","resourceVersion":"1136","creationTimestamp":"2021-04-08T03:54:05Z"},"data":{"key1":"config1"}}
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"1b593f8a-ed8d-4eed-8912-b1513662e5b0","resourceVersion":"1136","creationTimestamp":"2021-04-08T03:54:05Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"1b593f8a-ed8d-4eed-8912-b1513662e5b0"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 73 lines ...
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
has:apps/v1beta1
deployment.apps "nginx" deleted
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
Successful
message:nginx:
has:nginx:
+++ exit code: 0
Recording: run_kubectl_delete_allnamespaces_tests
... skipping 104 lines ...
has:Timeout
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 158 lines ...
foo.company.com/test patched
crd.sh:236: Successful get foos/test {{.patched}}: value1
(Bfoo.company.com/test patched
crd.sh:238: Successful get foos/test {{.patched}}: value2
(Bfoo.company.com/test patched
crd.sh:240: Successful get foos/test {{.patched}}: <no value>
(B+++ [0408 03:54:23] "kubectl patch --local" returns error as expected for CustomResource: error: strategic merge patch is not supported for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 288 lines ...
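
The "strategic merge patch is not supported ... try --type merge" error above reflects a real limitation: custom resources carry no strategic-merge-patch metadata, so patches against them must use JSON merge patch (or JSON patch / apply). A rough sketch with the dynamic client (the group/version/resource and the patch payload are illustrative assumptions):

package crpatchexample

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

// patchFoo applies a JSON merge patch to a namespaced custom resource; using
// types.StrategicMergePatchType here would be rejected, as the kubectl error
// above indicates.
func patchFoo(ctx context.Context, dyn dynamic.Interface, ns, name string) error {
	gvr := schema.GroupVersionResource{Group: "company.com", Version: "v1", Resource: "foos"}
	patch := []byte(`{"patched":"value3"}`)
	_, err := dyn.Resource(gvr).Namespace(ns).Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
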
(Bcrd.sh:450: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
(Bnamespace/non-native-resources created
bar.company.com/test created
crd.sh:455: Successful get bars {{len .items}}: 1
(Bnamespace "non-native-resources" deleted
crd.sh:458: Successful get bars {{len .items}}: 0
(BError from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
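The CRD cases above register and later delete several CustomResourceDefinitions (foos.company.com, bars.company.com, and others). For orientation only, and not the repo's actual fixture, a minimal apiextensions.k8s.io/v1 definition for the Foo type would look roughly like this; it is validated client-side so nothing is registered:

# Rough sketch of a Foo CRD; the test's real fixture may differ.
cat <<'EOF' | kubectl apply --dry-run=client -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.company.com
spec:
  group: company.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF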
+++ [0408 03:54:50] Testing recursive resources
+++ [0408 03:54:50] Creating namespace namespace-1617854090-26649
namespace/namespace-1617854090-26649 created
Context "test" modified.
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0408 03:54:51.226884   60000 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
E0408 03:54:51.228733   63761 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
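These recursive-resources cases appear to walk hack/testdata/recursive/... with kubectl's -R/--recursive flag, so the intact busybox0/busybox1 manifests are created while the broken file is reported per path; the message itself points at --validate=false as the way to skip the client-side schema check. A hedged sketch of that shape (the recursive flag is an assumption, it is not shown directly in this log):

# Assumed invocation shape; the broken file is still reported per path.
kubectl create -f hack/testdata/recursive/pod --recursive
kubectl get pods busybox0 busybox1    # the two well-formed manifests were still created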
W0408 03:54:51.374611   60000 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0408 03:54:51.376317   63761 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0408 03:54:51.485026   60000 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
E0408 03:54:51.486859   63761 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
W0408 03:54:51.601782   60000 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
E0408 03:54:51.603536   63761 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0408 03:54:52.271764   63761 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Name:         busybox0
Namespace:    namespace-1617854090-26649
Priority:     0
Node:         <none>
Labels:       app=busybox0
... skipping 154 lines ...
Node-Selectors:   <none>
Tolerations:      <none>
Events:           <none>
unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0408 03:54:52.515055   63761 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0408 03:54:52.752419   63761 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0408 03:54:52.809925   63761 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:Warning: resource pods/busybox0 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/busybox0 configured
Warning: resource pods/busybox1 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
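The warning above is kubectl apply noticing that busybox0/busybox1 were created imperatively, so the kubectl.kubernetes.io/last-applied-configuration annotation has to be patched in on the fly. Creating with --save-config (or applying from the start) records it up front; a sketch with a placeholder manifest path:

# pod.yaml is a placeholder path, not a file from this repo.
kubectl create -f pod.yaml --save-config    # records last-applied-configuration
kubectl apply -f pod.yaml                   # subsequent applies emit no warning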
generic-resources.sh:264: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:273: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:278: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:283: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:288: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:293: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:297: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
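The force-deletion warning above corresponds to deleting with a zero grace period, which removes the API objects without waiting for the kubelet to confirm the containers have stopped. The single-step equivalent:

kubectl delete pod busybox0 busybox1 --grace-period=0 --force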
generic-resources.sh:302: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0408 03:54:54.704960   63761 event.go:291] "Event occurred" object="namespace-1617854090-26649/busybox0" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-6k87f"
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0408 03:54:54.713199   63761 event.go:291] "Event occurred" object="namespace-1617854090-26649/busybox1" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-d88tx"
generic-resources.sh:306: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0408 03:54:54.839995   63761 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0408 03:54:54.899455   63761 namespace_controller.go:185] Namespace has been deleted non-native-resources
generic-resources.sh:311: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:312: Successful get rc busybox0 {{.spec.replicas}}: 1
E0408 03:54:55.107174   63761 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:313: Successful get rc busybox1 {{.spec.replicas}}: 1
E0408 03:54:55.257349   63761 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:318: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
generic-resources.sh:319: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
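The HPA assertions above (minReplicas 1, maxReplicas 2, target 80% CPU) come from autoscaling the two replication controllers; the non-recursive, single-resource equivalent of that step would be roughly:

kubectl autoscale rc busybox0 --min=1 --max=2 --cpu-percent=80
kubectl get hpa busybox0 -o jsonpath='{.spec.minReplicas} {.spec.maxReplicas} {.spec.targetCPUUtilizationPercentage}'    # expect: 1 2 80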
E0408 03:54:55.547668   63761 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:328: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:329: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:333: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:334: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
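The service checks above (<no value> for the port name, port 80) follow an expose step over the same recursive directory; for a single replication controller the equivalent is:

kubectl expose rc busybox0 --port=80
kubectl get service busybox0 -o jsonpath='{.spec.ports[0].port}'    # expect: 80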
generic-resources.sh:340: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:341: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:342: Successful get rc busybox1 {{.spec.replicas}}: 1
I0408 03:54:56.724580   63761 event.go:291] "Event occurred" object="namespace-1617854090-26649/busybox0" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-wpbdh"
I0408 03:54:56.738504   63761 event.go:291] "Event occurred" object="namespace-1617854090-26649/busybox1" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-t5k9t"
generic-resources.sh:346: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:347: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
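The replica counts above move from 1 to 2 because the scale step applies to every decodable manifest in the directory; per resource it is simply:

kubectl scale rc busybox0 --replicas=2
kubectl get rc busybox0 -o jsonpath='{.spec.replicas}'    # expect: 2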
generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:356: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:361: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0408 03:54:57.616695   63761 event.go:291] "Event occurred" object="namespace-1617854090-26649/nginx1-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx1-deployment-758b5949b6 to 2"
I0408 03:54:57.616765   63761 event.go:291] "Event occurred" object="namespace-1617854090-26649/nginx0-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx0-deployment-75db9cdfd9 to 2"
I0408 03:54:57.625049   63761 event.go:291] "Event occurred" object="namespace-1617854090-26649/nginx1-deployment-758b5949b6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-758b5949b6-l8h5x"
I0408 03:54:57.627369   63761 event.go:291] "Event occurred" object="namespace-1617854090-26649/nginx0-deployment-75db9cdfd9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-75db9cdfd9-lrnq5"
I0408 03:54:57.630821   63761 event.go:291] "Event occurred" object="namespace-1617854090-26649/nginx1-deployment-758b5949b6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-758b5949b6-f6fnb"
I0408 03:54:57.635187   63761 event.go:291] "Event occurred" object="namespace-1617854090-26649/nginx0-deployment-75db9cdfd9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-75db9cdfd9-42jnp"
generic-resources.sh:365: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
generic-resources.sh:366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
generic-resources.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
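The "skipped rollback" lines are kubectl rollout undo detecting that the target revision's template already matches the current one; the shape of that command (revision number illustrative):

kubectl rollout undo deployment/nginx1-deployment --to-revision=1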
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:378: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
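The paused:true assertions and the REVISION/CHANGE-CAUSE tables above come from kubectl rollout pause and kubectl rollout history run over the same directory; per deployment:

kubectl rollout pause deployment/nginx1-deployment
kubectl rollout history deployment/nginx1-deployment    # prints REVISION and CHANGE-CAUSE
kubectl rollout resume deployment/nginx1-deployment     # counterpart of pause; not shown in the visible log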
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},