PR: jdef: Fix docker/journald logging conformance
Result: FAILURE
Tests: 1 failed / 2480 succeeded
Started: 2020-02-14 20:00
Elapsed: 26m30s
Revision: 0e178f9341405d974bc493c75a4188b83f2ba189
Refs: 87933

Test Failures

k8s.io/kubernetes/test/integration/scheduler TestPostBindPlugin 4.17s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPostBindPlugin$
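
TestPostBindPlugin exercises the scheduler framework's PostBind extension point: an informational hook that the framework invokes after a pod has been successfully bound to a node, and which cannot alter the scheduling outcome. For orientation, below is a minimal sketch of such a plugin against the framework interfaces as they existed around this run (k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1; the package later moved to pkg/scheduler/framework). The plugin name and the call counter are illustrative stand-ins, not the actual fixture used by this test. The raw apiserver/scheduler log from the failing run follows.

package example

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
)

// postBindRecorder counts PostBind invocations, mirroring the kind of
// bookkeeping the integration test asserts on. The name and fields are
// illustrative, not the real test fixture.
type postBindRecorder struct {
	numPostBindCalled int
}

// Compile-time check that the type satisfies the extension point.
var _ framework.PostBindPlugin = &postBindRecorder{}

func (p *postBindRecorder) Name() string { return "post-bind-recorder" }

// PostBind runs after a successful bind. It has no return value, so it
// cannot fail the scheduling cycle or reschedule the pod.
func (p *postBindRecorder) PostBind(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) {
	p.numPostBindCalled++
	fmt.Printf("pod %s/%s bound to node %s\n", pod.Namespace, pod.Name, nodeName)
}
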
=== RUN   TestPostBindPlugin
W0214 20:22:53.861458  112516 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0214 20:22:53.861486  112516 services.go:51] Setting service IP to "10.0.0.1" (read-write).
I0214 20:22:53.861498  112516 master.go:314] Node port range unspecified. Defaulting to 30000-32767.
I0214 20:22:53.861510  112516 master.go:270] Using reconciler: 
I0214 20:22:53.861648  112516 config.go:625] Not requested to run hook priority-and-fairness-config-consumer
I0214 20:22:53.863377  112516 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.863558  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.863664  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.864416  112516 store.go:1362] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0214 20:22:53.864474  112516 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.864532  112516 reflector.go:211] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0214 20:22:53.864791  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.864816  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.866047  112516 watch_cache.go:449] Replace watchCache (rev: 29283) 
I0214 20:22:53.867315  112516 store.go:1362] Monitoring events count at <storage-prefix>//events
I0214 20:22:53.867495  112516 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.867778  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.867907  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.868440  112516 reflector.go:211] Listing and watching *core.Event from storage/cacher.go:/events
I0214 20:22:53.871667  112516 store.go:1362] Monitoring limitranges count at <storage-prefix>//limitranges
I0214 20:22:53.871761  112516 reflector.go:211] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0214 20:22:53.871879  112516 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.872024  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.872053  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.872188  112516 watch_cache.go:449] Replace watchCache (rev: 29283) 
I0214 20:22:53.873188  112516 store.go:1362] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0214 20:22:53.873292  112516 reflector.go:211] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0214 20:22:53.873441  112516 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.873554  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.873573  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.873785  112516 watch_cache.go:449] Replace watchCache (rev: 29283) 
I0214 20:22:53.874626  112516 watch_cache.go:449] Replace watchCache (rev: 29283) 
I0214 20:22:53.875230  112516 store.go:1362] Monitoring secrets count at <storage-prefix>//secrets
I0214 20:22:53.875407  112516 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.875431  112516 reflector.go:211] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0214 20:22:53.875516  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.875537  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.876374  112516 store.go:1362] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0214 20:22:53.876537  112516 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.876768  112516 watch_cache.go:449] Replace watchCache (rev: 29283) 
I0214 20:22:53.876773  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.876858  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.876913  112516 reflector.go:211] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0214 20:22:53.878011  112516 watch_cache.go:449] Replace watchCache (rev: 29283) 
I0214 20:22:53.878428  112516 store.go:1362] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0214 20:22:53.878714  112516 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.878907  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.878949  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.879358  112516 reflector.go:211] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0214 20:22:53.879820  112516 store.go:1362] Monitoring configmaps count at <storage-prefix>//configmaps
I0214 20:22:53.880477  112516 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.880505  112516 watch_cache.go:449] Replace watchCache (rev: 29283) 
I0214 20:22:53.880683  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.880708  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.880896  112516 reflector.go:211] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0214 20:22:53.881891  112516 store.go:1362] Monitoring namespaces count at <storage-prefix>//namespaces
I0214 20:22:53.881940  112516 watch_cache.go:449] Replace watchCache (rev: 29283) 
I0214 20:22:53.882023  112516 reflector.go:211] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0214 20:22:53.882369  112516 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.882537  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.882556  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.883093  112516 watch_cache.go:449] Replace watchCache (rev: 29283) 
I0214 20:22:53.884065  112516 store.go:1362] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0214 20:22:53.884196  112516 reflector.go:211] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0214 20:22:53.884312  112516 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.884465  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.884488  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.885054  112516 store.go:1362] Monitoring nodes count at <storage-prefix>//minions
I0214 20:22:53.885206  112516 reflector.go:211] Listing and watching *core.Node from storage/cacher.go:/minions
I0214 20:22:53.885302  112516 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.885398  112516 watch_cache.go:449] Replace watchCache (rev: 29283) 
I0214 20:22:53.885537  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.885566  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.887792  112516 store.go:1362] Monitoring pods count at <storage-prefix>//pods
I0214 20:22:53.887882  112516 reflector.go:211] Listing and watching *core.Pod from storage/cacher.go:/pods
I0214 20:22:53.888197  112516 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.888897  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.888928  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.888967  112516 watch_cache.go:449] Replace watchCache (rev: 29283) 
I0214 20:22:53.889742  112516 watch_cache.go:449] Replace watchCache (rev: 29283) 
I0214 20:22:53.889770  112516 store.go:1362] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0214 20:22:53.889799  112516 reflector.go:211] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0214 20:22:53.890279  112516 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.890474  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.890528  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.890834  112516 watch_cache.go:449] Replace watchCache (rev: 29283) 
I0214 20:22:53.891604  112516 store.go:1362] Monitoring services count at <storage-prefix>//services/specs
I0214 20:22:53.891667  112516 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.891813  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.891839  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.891874  112516 reflector.go:211] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0214 20:22:53.893564  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.893593  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.893800  112516 watch_cache.go:449] Replace watchCache (rev: 29283) 
I0214 20:22:53.896880  112516 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.897041  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.897077  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.897846  112516 store.go:1362] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0214 20:22:53.897870  112516 rest.go:113] the default service ipfamily for this cluster is: IPv4
I0214 20:22:53.897940  112516 reflector.go:211] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0214 20:22:53.898731  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:53.898755  112516 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.898996  112516 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.900554  112516 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.901158  112516 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.902243  112516 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.902931  112516 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.903302  112516 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.903454  112516 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.903683  112516 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.904822  112516 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.905548  112516 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.905874  112516 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.907296  112516 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.907672  112516 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.908320  112516 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.908903  112516 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.909709  112516 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.910019  112516 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.910674  112516 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.910940  112516 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.913977  112516 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.914325  112516 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.914644  112516 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.915608  112516 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.916457  112516 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.920183  112516 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.921632  112516 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.921978  112516 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.922304  112516 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.923091  112516 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.923431  112516 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.924678  112516 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.925732  112516 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.926513  112516 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.927989  112516 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.928291  112516 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.928425  112516 master.go:527] Skipping disabled API group "auditregistration.k8s.io".
I0214 20:22:53.928450  112516 master.go:538] Enabling API group "authentication.k8s.io".
I0214 20:22:53.928472  112516 master.go:538] Enabling API group "authorization.k8s.io".
I0214 20:22:53.928828  112516 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.929130  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.929246  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.930122  112516 store.go:1362] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0214 20:22:53.930330  112516 reflector.go:211] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0214 20:22:53.930783  112516 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.930895  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.930914  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.932153  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:53.932177  112516 store.go:1362] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0214 20:22:53.932373  112516 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.932491  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.932510  112516 reflector.go:211] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0214 20:22:53.932532  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.933345  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:53.936795  112516 store.go:1362] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0214 20:22:53.936823  112516 master.go:538] Enabling API group "autoscaling".
I0214 20:22:53.936859  112516 reflector.go:211] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0214 20:22:53.937014  112516 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.937171  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.937198  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.937858  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:53.938327  112516 store.go:1362] Monitoring jobs.batch count at <storage-prefix>//jobs
I0214 20:22:53.938533  112516 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.938645  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.938732  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.938831  112516 reflector.go:211] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0214 20:22:53.939776  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:53.940662  112516 store.go:1362] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0214 20:22:53.940692  112516 master.go:538] Enabling API group "batch".
I0214 20:22:53.940720  112516 reflector.go:211] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0214 20:22:53.941011  112516 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.941247  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.941316  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.941636  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:53.941966  112516 store.go:1362] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0214 20:22:53.941989  112516 master.go:538] Enabling API group "certificates.k8s.io".
I0214 20:22:53.942069  112516 reflector.go:211] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0214 20:22:53.942590  112516 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.942730  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.942759  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.943381  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:53.943928  112516 store.go:1362] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0214 20:22:53.943982  112516 reflector.go:211] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0214 20:22:53.944183  112516 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.944340  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.944372  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.945023  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:53.946417  112516 reflector.go:211] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0214 20:22:53.946446  112516 store.go:1362] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0214 20:22:53.946470  112516 master.go:538] Enabling API group "coordination.k8s.io".
I0214 20:22:53.946718  112516 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.946841  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.946871  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.947263  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:53.997557  112516 store.go:1362] Monitoring endpointslices.discovery.k8s.io count at <storage-prefix>//endpointslices
I0214 20:22:53.997591  112516 master.go:538] Enabling API group "discovery.k8s.io".
I0214 20:22:53.997822  112516 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:53.997998  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:53.998027  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:53.998271  112516 reflector.go:211] Listing and watching *discovery.EndpointSlice from storage/cacher.go:/endpointslices
I0214 20:22:53.999725  112516 store.go:1362] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0214 20:22:53.999757  112516 master.go:538] Enabling API group "extensions".
I0214 20:22:53.999981  112516 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.000047  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:54.000116  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.000140  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.000230  112516 reflector.go:211] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0214 20:22:54.001478  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:54.002520  112516 store.go:1362] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0214 20:22:54.002611  112516 reflector.go:211] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0214 20:22:54.002757  112516 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.002909  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.002931  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.003572  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:54.003732  112516 store.go:1362] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0214 20:22:54.003751  112516 master.go:538] Enabling API group "networking.k8s.io".
I0214 20:22:54.003796  112516 reflector.go:211] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0214 20:22:54.003998  112516 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.004136  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.004157  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.004626  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:54.004788  112516 store.go:1362] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0214 20:22:54.004826  112516 reflector.go:211] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0214 20:22:54.004842  112516 master.go:538] Enabling API group "node.k8s.io".
I0214 20:22:54.005448  112516 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.005586  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.005605  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.006747  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:54.006920  112516 store.go:1362] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0214 20:22:54.006988  112516 reflector.go:211] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0214 20:22:54.007543  112516 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.007698  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.007720  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.007765  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:54.008716  112516 store.go:1362] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0214 20:22:54.008759  112516 master.go:538] Enabling API group "policy".
I0214 20:22:54.008815  112516 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.008947  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.008964  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.008983  112516 reflector.go:211] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0214 20:22:54.010017  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:54.010339  112516 store.go:1362] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0214 20:22:54.010420  112516 reflector.go:211] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0214 20:22:54.010541  112516 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.010667  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.010684  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.011065  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:54.011479  112516 store.go:1362] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0214 20:22:54.011558  112516 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.011672  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.011707  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.011738  112516 reflector.go:211] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0214 20:22:54.012935  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:54.013344  112516 store.go:1362] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0214 20:22:54.013697  112516 reflector.go:211] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0214 20:22:54.014258  112516 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.014585  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.014614  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.014873  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:54.016377  112516 store.go:1362] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0214 20:22:54.016467  112516 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.016535  112516 reflector.go:211] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0214 20:22:54.016585  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.016603  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.017870  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:54.018834  112516 store.go:1362] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0214 20:22:54.018930  112516 reflector.go:211] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0214 20:22:54.019034  112516 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.019141  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.019171  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.019800  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:54.020278  112516 reflector.go:211] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0214 20:22:54.020355  112516 store.go:1362] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0214 20:22:54.020419  112516 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.020545  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.020612  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.021454  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:54.021603  112516 store.go:1362] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0214 20:22:54.021661  112516 reflector.go:211] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0214 20:22:54.022002  112516 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.022139  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.022163  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.022903  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:54.023220  112516 store.go:1362] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0214 20:22:54.023245  112516 master.go:538] Enabling API group "rbac.authorization.k8s.io".
I0214 20:22:54.023481  112516 reflector.go:211] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0214 20:22:54.025824  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
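
[editor's note] Each `reflector.go:211] Listing and watching *T from storage/cacher.go:/...` line marks a cacher's reflector performing an initial LIST — whose result replaces the watch cache, hence the paired `watch_cache.go:449] Replace watchCache (rev: N)` lines — and then a WATCH from that resource version. The sketch below strips that pattern down to hypothetical lister/watcher interfaces; it is not client-go's actual Reflector.

package main

import "fmt"

// event is a hypothetical watch event; the real cacher consumes etcd watch events.
type event struct {
	rev int64
	key string
}

// lister and watcher are illustrative stand-ins, not client-go interfaces.
type lister interface {
	List() (items []string, rev int64)
}
type watcher interface {
	Watch(fromRev int64) <-chan event
}

// listAndWatch sketches the reflector pattern: one LIST seeds the cache
// (the "Replace watchCache (rev: N)" lines), then a WATCH from that revision
// applies incremental updates.
func listAndWatch(l lister, w watcher, stop <-chan struct{}) {
	items, rev := l.List()
	cache := make(map[string]struct{}, len(items))
	for _, it := range items {
		cache[it] = struct{}{}
	}
	fmt.Printf("Replace watchCache (rev: %d), %d items\n", rev, len(cache))

	ch := w.Watch(rev)
	for {
		select {
		case ev, ok := <-ch:
			if !ok {
				return // watch closed; the real reflector would re-list
			}
			cache[ev.key] = struct{}{}
			rev = ev.rev
		case <-stop:
			return
		}
	}
}

// fakeLW is a trivial in-memory implementation for demonstration.
type fakeLW struct{}

func (fakeLW) List() ([]string, int64) { return []string{"roles/admin", "roles/view"}, 29284 }
func (fakeLW) Watch(fromRev int64) <-chan event {
	ch := make(chan event, 1)
	ch <- event{rev: fromRev + 1, key: "roles/edit"}
	close(ch)
	return ch
}

func main() {
	listAndWatch(fakeLW{}, fakeLW{}, nil)
}
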
I0214 20:22:54.026614  112516 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.026798  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.026823  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.027403  112516 store.go:1362] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0214 20:22:54.027489  112516 reflector.go:211] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0214 20:22:54.027618  112516 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.027755  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.027775  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.028614  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:54.028870  112516 store.go:1362] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0214 20:22:54.028887  112516 master.go:538] Enabling API group "scheduling.k8s.io".
I0214 20:22:54.029025  112516 master.go:527] Skipping disabled API group "settings.k8s.io".
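
[editor's note] The alternating `master.go:538] Enabling API group ...` and `master.go:527] Skipping disabled API group ...` lines reflect the apiserver's resolved runtime configuration: alpha-only groups such as settings.k8s.io (and flowcontrol.apiserver.k8s.io below) default off at this revision and would need --runtime-config to be served; the later `Skipping API batch/v2alpha1 because it has no resources` warning is the same gate applied per version. The toy gate below uses hypothetical names to illustrate the decision, not the apiserver's actual code path.

package main

import "fmt"

// enabledGroups is a hypothetical stand-in for the resolved --runtime-config
// state; the real apiserver decides per group/version.
var enabledGroups = map[string]bool{
	"node.k8s.io":                  true,
	"policy":                       true,
	"rbac.authorization.k8s.io":    true,
	"scheduling.k8s.io":            true,
	"settings.k8s.io":              false, // alpha-only, off by default
	"flowcontrol.apiserver.k8s.io": false, // alpha-only, off by default
}

func installGroup(name string) {
	if !enabledGroups[name] {
		fmt.Printf("Skipping disabled API group %q.\n", name)
		return
	}
	fmt.Printf("Enabling API group %q.\n", name)
}

func main() {
	for _, g := range []string{"scheduling.k8s.io", "settings.k8s.io"} {
		installGroup(g)
	}
}
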
I0214 20:22:54.029226  112516 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.029320  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.029346  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.030307  112516 reflector.go:211] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0214 20:22:54.030680  112516 store.go:1362] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0214 20:22:54.030873  112516 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.031099  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.031201  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.030901  112516 reflector.go:211] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0214 20:22:54.031804  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:54.033514  112516 watch_cache.go:449] Replace watchCache (rev: 29284) 
I0214 20:22:54.034044  112516 store.go:1362] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0214 20:22:54.034098  112516 reflector.go:211] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0214 20:22:54.034941  112516 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.035654  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.035774  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.037717  112516 store.go:1362] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0214 20:22:54.037888  112516 reflector.go:211] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0214 20:22:54.038163  112516 watch_cache.go:449] Replace watchCache (rev: 29285) 
I0214 20:22:54.037911  112516 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.039488  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.039734  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.040748  112516 store.go:1362] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0214 20:22:54.040976  112516 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.041110  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.041130  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.041620  112516 reflector.go:211] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0214 20:22:54.042661  112516 store.go:1362] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0214 20:22:54.042826  112516 reflector.go:211] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0214 20:22:54.042876  112516 watch_cache.go:449] Replace watchCache (rev: 29286) 
I0214 20:22:54.042999  112516 watch_cache.go:449] Replace watchCache (rev: 29285) 
I0214 20:22:54.046547  112516 watch_cache.go:449] Replace watchCache (rev: 29286) 
I0214 20:22:54.047057  112516 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.047844  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.048015  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.049262  112516 store.go:1362] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0214 20:22:54.049502  112516 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.050218  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.050424  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.050242  112516 reflector.go:211] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0214 20:22:54.051402  112516 store.go:1362] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0214 20:22:54.051425  112516 master.go:538] Enabling API group "storage.k8s.io".
I0214 20:22:54.051459  112516 master.go:527] Skipping disabled API group "flowcontrol.apiserver.k8s.io".
I0214 20:22:54.051612  112516 reflector.go:211] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0214 20:22:54.051680  112516 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.051818  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.051836  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.052749  112516 store.go:1362] Monitoring deployments.apps count at <storage-prefix>//deployments
I0214 20:22:54.052985  112516 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.053146  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.053171  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.053232  112516 watch_cache.go:449] Replace watchCache (rev: 29286) 
I0214 20:22:54.053345  112516 reflector.go:211] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0214 20:22:54.053376  112516 watch_cache.go:449] Replace watchCache (rev: 29286) 
I0214 20:22:54.054406  112516 store.go:1362] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0214 20:22:54.054507  112516 reflector.go:211] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0214 20:22:54.054620  112516 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.054854  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.054881  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.055661  112516 store.go:1362] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0214 20:22:54.055874  112516 watch_cache.go:449] Replace watchCache (rev: 29286) 
I0214 20:22:54.055783  112516 reflector.go:211] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0214 20:22:54.056395  112516 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.056510  112516 watch_cache.go:449] Replace watchCache (rev: 29286) 
I0214 20:22:54.056853  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.057078  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.058007  112516 store.go:1362] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0214 20:22:54.058233  112516 reflector.go:211] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0214 20:22:54.058210  112516 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.058380  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.058398  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.058513  112516 watch_cache.go:449] Replace watchCache (rev: 29286) 
I0214 20:22:54.059476  112516 watch_cache.go:449] Replace watchCache (rev: 29286) 
I0214 20:22:54.059767  112516 store.go:1362] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0214 20:22:54.059788  112516 master.go:538] Enabling API group "apps".
I0214 20:22:54.059831  112516 reflector.go:211] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0214 20:22:54.059985  112516 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.060096  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.060113  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.061184  112516 watch_cache.go:449] Replace watchCache (rev: 29286) 
I0214 20:22:54.061221  112516 store.go:1362] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0214 20:22:54.061399  112516 reflector.go:211] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0214 20:22:54.061425  112516 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.061531  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.061552  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.062979  112516 watch_cache.go:449] Replace watchCache (rev: 29286) 
I0214 20:22:54.063073  112516 store.go:1362] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0214 20:22:54.063253  112516 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.063283  112516 reflector.go:211] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0214 20:22:54.063392  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.063414  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.064276  112516 store.go:1362] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0214 20:22:54.064363  112516 reflector.go:211] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0214 20:22:54.064754  112516 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.065339  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.065436  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.067462  112516 store.go:1362] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0214 20:22:54.067492  112516 master.go:538] Enabling API group "admissionregistration.k8s.io".
I0214 20:22:54.067537  112516 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.067806  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.067833  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.068049  112516 reflector.go:211] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0214 20:22:54.068354  112516 watch_cache.go:449] Replace watchCache (rev: 29286) 
I0214 20:22:54.068403  112516 watch_cache.go:449] Replace watchCache (rev: 29286) 
I0214 20:22:54.070355  112516 watch_cache.go:449] Replace watchCache (rev: 29286) 
I0214 20:22:54.070863  112516 store.go:1362] Monitoring events count at <storage-prefix>//events
I0214 20:22:54.070888  112516 master.go:538] Enabling API group "events.k8s.io".
I0214 20:22:54.070928  112516 reflector.go:211] Listing and watching *core.Event from storage/cacher.go:/events
I0214 20:22:54.071172  112516 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.071427  112516 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.071712  112516 watch_cache.go:449] Replace watchCache (rev: 29286) 
I0214 20:22:54.071979  112516 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.072161  112516 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.072651  112516 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.072858  112516 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.073076  112516 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.073649  112516 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.073803  112516 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.073915  112516 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
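
[editor's note] Unlike the resources above, the tokenreviews and *accessreviews storages are never followed by `store.go ... Monitoring` or reflector lines: review-style APIs are request-scoped "virtual" resources that only serve create and persist nothing in etcd. The sketch below illustrates that create-only shape with local, illustrative types — not the real k8s.io/apiserver registry/rest interfaces.

package main

import (
	"context"
	"fmt"
)

// creater is the only verb a review-style resource needs; the names here are
// illustrative, not the real k8s.io/apiserver/pkg/registry/rest interfaces.
type creater interface {
	Create(ctx context.Context, obj string) (string, error)
}

// tokenReviewREST evaluates a request and returns a result without storing it,
// which is why no Monitoring/reflector lines follow its storage_factory entry.
type tokenReviewREST struct{}

func (tokenReviewREST) Create(ctx context.Context, obj string) (string, error) {
	return fmt.Sprintf("reviewed %q: authenticated=false", obj), nil
}

func main() {
	var r creater = tokenReviewREST{}
	out, _ := r.Create(context.Background(), "some-bearer-token")
	fmt.Println(out)
}
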
I0214 20:22:54.075263  112516 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.075800  112516 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.077555  112516 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.078233  112516 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.079950  112516 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.080763  112516 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.082370  112516 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.082983  112516 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.084736  112516 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.085142  112516 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0214 20:22:54.085322  112516 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0214 20:22:54.086510  112516 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.091073  112516 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.091500  112516 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.093685  112516 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.094616  112516 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.096063  112516 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0214 20:22:54.096147  112516 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
I0214 20:22:54.097072  112516 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.098137  112516 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.099201  112516 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.101938  112516 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.102408  112516 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.103119  112516 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0214 20:22:54.103186  112516 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0214 20:22:54.104355  112516 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.104899  112516 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.105578  112516 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.106256  112516 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.107078  112516 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.107634  112516 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.108223  112516 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.109253  112516 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.109703  112516 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.110839  112516 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.111554  112516 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0214 20:22:54.111617  112516 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0214 20:22:54.112212  112516 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.113283  112516 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0214 20:22:54.113354  112516 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0214 20:22:54.113916  112516 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.114463  112516 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.115078  112516 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.115980  112516 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.116826  112516 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.117464  112516 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.118198  112516 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.119416  112516 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0214 20:22:54.119515  112516 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0214 20:22:54.120411  112516 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.121623  112516 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.122002  112516 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.123279  112516 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.123660  112516 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.124373  112516 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.125052  112516 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.125307  112516 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.125620  112516 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.126675  112516 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.126963  112516 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.127258  112516 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0214 20:22:54.127335  112516 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0214 20:22:54.127344  112516 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0214 20:22:54.128023  112516 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.129027  112516 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.129718  112516 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.130672  112516 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.131542  112516 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
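(Note: the CompactionInterval and CountMetricPollPeriod fields in the storagebackend.Config dumps above are Go time.Duration values printed as raw nanosecond counts. A minimal sketch of the conversion:

package main

import (
	"fmt"
	"time"
)

func main() {
	// the raw integers from the storagebackend.Config dumps are nanoseconds
	fmt.Println(time.Duration(300000000000)) // CompactionInterval: 5m0s
	fmt.Println(time.Duration(60000000000))  // CountMetricPollPeriod: 1m0s
}
)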
I0214 20:22:54.135833  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
W0214 20:22:54.135842  112516 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0214 20:22:54.135861  112516 healthz.go:186] healthz check poststarthook/bootstrap-controller failed: not finished
I0214 20:22:54.135871  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.135881  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.135890  112516 healthz.go:186] healthz check poststarthook/start-cluster-authentication-info-controller failed: not finished
I0214 20:22:54.135897  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/start-cluster-authentication-info-controller failed: reason withheld
healthz check failed
I0214 20:22:54.135939  112516 httplog.go:90] verb="GET" URI="/healthz" latency=292.459µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34622": 
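(Note: the [+]/[-] listing above is the verbose per-check report the apiserver serves from /healthz while any check is still failing; individual checks are also exposed as subpaths such as /healthz/etcd. A rough probe with plain net/http — the address is hypothetical, since the integration test binds the apiserver to its own loopback port:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// hypothetical address; substitute the apiserver's actual host:port
	resp, err := http.Get("http://127.0.0.1:8080/healthz?verbose")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode) // 200 only once every check passes
	fmt.Print(string(body))      // the same [+]/[-] per-check listing as in the log
}
)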
I0214 20:22:54.135948  112516 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0214 20:22:54.135965  112516 shared_informer.go:206] Waiting for caches to sync for cluster_authentication_trust_controller
I0214 20:22:54.136220  112516 reflector.go:175] Starting reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0214 20:22:54.136235  112516 reflector.go:211] Listing and watching *v1.ConfigMap from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0214 20:22:54.136951  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0" latency=472.474µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34622": 
I0214 20:22:54.137615  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency=1.888328ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34624": 
I0214 20:22:54.138050  112516 get.go:251] Starting watch for /api/v1/namespaces/kube-system/configmaps, rv=29283 labels= fields= timeout=6m38s
I0214 20:22:54.139901  112516 httplog.go:90] verb="GET" URI="/api/v1/services" latency=976.337µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34624": 
I0214 20:22:54.145397  112516 httplog.go:90] verb="GET" URI="/api/v1/services" latency=1.152961ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34624": 
I0214 20:22:54.147757  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.147782  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.147794  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.147804  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.147842  112516 httplog.go:90] verb="GET" URI="/healthz" latency=212.799µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34624": 
I0214 20:22:54.150005  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=2.010753ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34632": 
I0214 20:22:54.150947  112516 httplog.go:90] verb="GET" URI="/api/v1/services" latency=879.891µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34624": 
I0214 20:22:54.151825  112516 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=1.42466ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34632": 
I0214 20:22:54.157635  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=5.408108ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34632": 
I0214 20:22:54.157977  112516 httplog.go:90] verb="GET" URI="/api/v1/services" latency=5.979266ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34624": 
I0214 20:22:54.160041  112516 httplog.go:90] verb="GET" URI="/api/v1/services" latency=1.758165ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34632": 
I0214 20:22:54.160315  112516 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=1.936651ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34624": 
I0214 20:22:54.163823  112516 httplog.go:90] verb="GET" URI="/api/v1/services" latency=1.923831ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34636": 
I0214 20:22:54.163832  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-node-lease" latency=2.074178ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:54.166095  112516 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=1.817829ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:54.169569  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=2.054706ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:54.173365  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=3.422825ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:54.179925  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-node-lease" latency=1.535619ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:54.236137  112516 shared_informer.go:236] caches populated
I0214 20:22:54.236173  112516 shared_informer.go:213] Caches are synced for cluster_authentication_trust_controller 
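(Note: the "Waiting for caches to sync" / "Caches are synced" pair is client-go's standard shared-informer startup handshake; the reflector line above shows the underlying *v1.ConfigMap informer with a 12h resync. A minimal sketch of the same pattern, where the kubeconfig path is an assumption:

package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// assumes the default kubeconfig at ~/.kube/config
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// 12h resync mirrors the "(12h0m0s)" reflector line in the log
	factory := informers.NewSharedInformerFactory(client, 12*time.Hour)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)

	// blocks until the initial List/Watch has populated the cache,
	// the same handshake the shared_informer.go lines above record
	if !cache.WaitForCacheSync(stopCh, cmInformer.HasSynced) {
		panic("cache failed to sync")
	}
}
)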
I0214 20:22:54.236707  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.236742  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.236755  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.236764  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.236818  112516 httplog.go:90] verb="GET" URI="/healthz" latency=267.652µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:54.248649  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.248698  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.248720  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.248736  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.248808  112516 httplog.go:90] verb="GET" URI="/healthz" latency=338.53µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:54.336855  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.336893  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.336906  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.336915  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.336965  112516 httplog.go:90] verb="GET" URI="/healthz" latency=288.001µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:54.357318  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.357351  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.357363  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.357372  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.357427  112516 httplog.go:90] verb="GET" URI="/healthz" latency=268.128µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:54.436769  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.436806  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.436818  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.436827  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.436884  112516 httplog.go:90] verb="GET" URI="/healthz" latency=249.84µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:54.448512  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.448551  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.448580  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.448595  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.448641  112516 httplog.go:90] verb="GET" URI="/healthz" latency=262.623µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:54.536780  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.536814  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.536835  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.536844  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.536894  112516 httplog.go:90] verb="GET" URI="/healthz" latency=331.06µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:54.548551  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.548641  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.548654  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.548665  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.548721  112516 httplog.go:90] verb="GET" URI="/healthz" latency=314.156µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:54.636825  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.636867  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.636878  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.636887  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.636956  112516 httplog.go:90] verb="GET" URI="/healthz" latency=292.693µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:54.648528  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.648949  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.648975  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.648985  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.649077  112516 httplog.go:90] verb="GET" URI="/healthz" latency=684.016µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:54.736768  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.736804  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.736816  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.736825  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.736872  112516 httplog.go:90] verb="GET" URI="/healthz" latency=273.846µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:54.748447  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.748495  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.748507  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.748515  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.748560  112516 httplog.go:90] verb="GET" URI="/healthz" latency=233.531µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:54.836727  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.836764  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.836777  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.836796  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.836841  112516 httplog.go:90] verb="GET" URI="/healthz" latency=284.06µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:54.851737  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.851771  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.851783  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.851797  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.851854  112516 httplog.go:90] verb="GET" URI="/healthz" latency=275.883µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:54.863554  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.863641  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.937751  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.937780  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.937790  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.937863  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.279971ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:54.950531  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.950559  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.950569  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.950644  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.187268ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.037711  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.037747  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:55.037757  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.037845  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.270116ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:55.049754  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.049781  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:55.049793  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.049869  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.419932ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.137186  112516 httplog.go:90] verb="GET" URI="/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical" latency=1.443241ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.137599  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.817505ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34636": 
I0214 20:22:55.138135  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.138160  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:55.138171  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.138232  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.520823ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:55.139877  112516 httplog.go:90] verb="POST" URI="/apis/scheduling.k8s.io/v1/priorityclasses" latency=1.818883ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.140046  112516 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0214 20:22:55.140391  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.989272ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34636": 
I0214 20:22:55.141327  112516 httplog.go:90] verb="GET" URI="/apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical" latency=1.060064ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.142481  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view" latency=1.3201ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34636": 
I0214 20:22:55.143596  112516 httplog.go:90] verb="POST" URI="/apis/scheduling.k8s.io/v1/priorityclasses" latency=1.84435ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.143791  112516 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0214 20:22:55.143814  112516 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
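(Note: system-node-critical (2000001000) and system-cluster-critical (2000000000) are the two built-in classes this bootstrap hook guarantees; user-defined classes go through the same scheduling.k8s.io/v1 API with values capped at 1000000000. A minimal client-go sketch with an illustrative class name and value:

package main

import (
	"context"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// illustrative name and value; the two-billion range used by the
	// system classes created above is reserved
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-high-priority"},
		Value:      1000000,
	}
	if _, err := client.SchedulingV1().PriorityClasses().Create(
		context.TODO(), pc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
)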
I0214 20:22:55.144282  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/view" latency=1.393369ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34636": 
I0214 20:22:55.145484  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin" latency=775.643µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.146676  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/admin" latency=764.496µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.147917  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit" latency=749.818µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.149096  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.149120  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.149184  112516 httplog.go:90] verb="GET" URI="/healthz" latency=807.569µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.149468  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/edit" latency=1.065318ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.151245  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery" latency=1.396993ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.152321  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin" latency=754.811µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.154413  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.608767ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.154811  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/cluster-admin
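(Note: each 404 GET followed by a 201 POST in this stretch is the RBAC bootstrapper creating a default cluster role that does not exist yet. A rough client-go sketch of that get-then-create shape, using an illustrative role rather than one of the bootstrap defaults:

package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// illustrative role, not one of the bootstrap defaults
	name := "example-pod-reader"
	_, err = client.RbacV1().ClusterRoles().Get(context.TODO(), name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// GET returned 404, so POST the role: the same 404/201 pair
		// the httplog lines above record for each default role
		role := &rbacv1.ClusterRole{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Rules: []rbacv1.PolicyRule{{
				APIGroups: []string{""},
				Resources: []string{"pods"},
				Verbs:     []string{"get", "list", "watch"},
			}},
		}
		_, err = client.RbacV1().ClusterRoles().Create(context.TODO(), role, metav1.CreateOptions{})
	}
	if err != nil {
		panic(err)
	}
}
)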
I0214 20:22:55.168868  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery" latency=13.824306ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.171439  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.944257ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.171704  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0214 20:22:55.172974  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user" latency=986.107µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.175386  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.914927ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.175567  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0214 20:22:55.178824  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer" latency=2.819476ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.180987  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.745182ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.181197  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0214 20:22:55.182225  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/admin" latency=828.2µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.186169  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=3.567113ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.186414  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/admin
I0214 20:22:55.188103  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/edit" latency=1.52165ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.190641  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.88718ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.191404  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/edit
I0214 20:22:55.192517  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/view" latency=877.058µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.194711  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.662475ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.195025  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/view
I0214 20:22:55.196923  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin" latency=1.724735ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.198957  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.592537ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.199522  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0214 20:22:55.205994  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit" latency=6.17096ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.209135  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.599705ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.212189  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0214 20:22:55.213385  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view" latency=942.37µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.217654  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=3.888654ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.218048  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0214 20:22:55.219419  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster" latency=1.149491ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.221898  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.912579ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.222135  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0214 20:22:55.223195  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node" latency=876.435µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.225591  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.026194ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.225894  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node
I0214 20:22:55.227212  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector" latency=1.093592ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.229291  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.677105ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.229553  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0214 20:22:55.230760  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin" latency=866.66µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.233908  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.686777ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.234158  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0214 20:22:55.236252  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper" latency=1.167202ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.237714  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.237744  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.237789  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.227428ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:55.239125  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.970651ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.239435  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0214 20:22:55.241565  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator" latency=1.949965ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.244844  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.850252ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.245094  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0214 20:22:55.246364  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator" latency=889.029µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.249186  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.903746ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.249382  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0214 20:22:55.249993  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.250014  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.250053  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.003877ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.251791  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager" latency=2.24246ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.255472  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.965408ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.255915  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0214 20:22:55.257944  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns" latency=1.243088ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.264371  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=5.443572ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.264663  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0214 20:22:55.267721  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner" latency=2.835128ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.271187  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.534113ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.271484  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0214 20:22:55.273219  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient" latency=1.520971ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.275451  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.880429ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.276126  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0214 20:22:55.277161  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" latency=835.211µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.282977  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=5.089694ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.283293  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0214 20:22:55.285022  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler" latency=1.403394ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.287411  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.790875ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.287612  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0214 20:22:55.288758  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier" latency=955.87µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.292771  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.348576ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.292978  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0214 20:22:55.294641  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler" latency=1.134378ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.297025  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.888066ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.297295  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0214 20:22:55.298417  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller" latency=943.812µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.300912  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.100385ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.301289  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0214 20:22:55.305260  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller" latency=3.707579ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.308452  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.659435ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.308813  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0214 20:22:55.310437  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller" latency=1.433265ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.313146  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.244333ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.313594  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0214 20:22:55.317972  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller" latency=4.160208ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.321020  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.339191ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.321264  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0214 20:22:55.322782  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller" latency=1.290272ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.325067  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.697358ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.325446  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0214 20:22:55.326424  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller" latency=792.97µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.328483  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.59717ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.329013  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0214 20:22:55.330006  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller" latency=795.469µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.331872  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.409977ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.332138  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0214 20:22:55.333461  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpointslice-controller" latency=1.092811ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.335828  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.802507ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.336052  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0214 20:22:55.337231  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller" latency=830.108µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.337982  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.338142  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.338335  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.682029ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:55.339291  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.665793ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.339508  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0214 20:22:55.340625  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector" latency=964.445µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.342592  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.591396ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.342799  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0214 20:22:55.344933  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler" latency=1.994768ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.347268  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.811658ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.347513  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0214 20:22:55.350183  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.350217  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.350282  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.484676ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.350342  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller" latency=1.253063ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.352733  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.883749ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.352944  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0214 20:22:55.353877  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller" latency=759.679µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.355747  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.424316ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.356045  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0214 20:22:55.357352  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller" latency=1.097791ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.359019  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.222273ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.359233  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0214 20:22:55.360404  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder" latency=838.276µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.362137  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.315647ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.363268  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0214 20:22:55.364182  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector" latency=726.497µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.374365  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.642874ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.374598  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0214 20:22:55.375707  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller" latency=887.463µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.378003  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.708122ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.378221  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0214 20:22:55.379238  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller" latency=788.48µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.381395  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.716936ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.381822  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0214 20:22:55.384602  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller" latency=2.540785ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.387010  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.876718ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.387284  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0214 20:22:55.388273  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller" latency=775.586µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.390210  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.525012ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.390646  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0214 20:22:55.391810  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller" latency=935.692µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.393639  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.498379ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.393866  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0214 20:22:55.395043  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller" latency=947.409µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.396870  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.468325ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.397070  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0214 20:22:55.398218  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller" latency=975.35µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.400522  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.913641ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.400916  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0214 20:22:55.402182  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller" latency=1.020221ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.409417  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=6.88847ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.409728  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0214 20:22:55.410873  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller" latency=992.006µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.413130  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.601715ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.413351  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0214 20:22:55.415368  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller" latency=1.636588ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.422229  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.98623ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.422481  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0214 20:22:55.423783  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller" latency=1.058985ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.426531  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.198006ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.426950  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0214 20:22:55.428007  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin" latency=852.956µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.437692  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.437717  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.437765  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.322286ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:55.438500  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.425059ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.438711  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
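[Editor's note] With the roles in place, the reconciler moves on to the default ClusterRoleBindings, starting with cluster-admin. For reference, the stock cluster-admin binding grants the cluster-admin ClusterRole to the system:masters group; built with the rbac/v1 types it looks roughly like this (a sketch, not the literal bootstrappolicy source):

package bootstrap

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clusterAdminBinding approximates the default binding created above:
// ClusterRole cluster-admin -> group system:masters.
var clusterAdminBinding = &rbacv1.ClusterRoleBinding{
	ObjectMeta: metav1.ObjectMeta{Name: "cluster-admin"},
	Subjects: []rbacv1.Subject{{
		Kind:     rbacv1.GroupKind,
		APIGroup: rbacv1.GroupName,
		Name:     "system:masters",
	}},
	RoleRef: rbacv1.RoleRef{
		APIGroup: rbacv1.GroupName,
		Kind:     "ClusterRole",
		Name:     "cluster-admin",
	},
}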
I0214 20:22:55.449433  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.449468  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.449534  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.145198ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.457275  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery" latency=1.32282ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.479065  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.773107ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.479455  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0214 20:22:55.499800  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user" latency=1.33261ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.518515  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.525025ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.519369  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0214 20:22:55.537433  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer" latency=1.489615ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.538209  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.538234  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.538286  112516 httplog.go:90] verb="GET" URI="/healthz" latency=906.36µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:55.549359  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.549395  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.549460  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.082222ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.558220  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.292022ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.558489  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0214 20:22:55.577393  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier" latency=1.340847ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.598102  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.081858ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.598359  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0214 20:22:55.617335  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager" latency=1.337338ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.637516  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.637552  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.637611  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.181582ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:55.638238  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.221343ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.638893  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0214 20:22:55.649895  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.649934  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.650001  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.613279ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.662005  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns" latency=1.292002ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.678159  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.137145ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.678415  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0214 20:22:55.724319  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler" latency=4.217975ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.732737  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=7.859075ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.733028  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0214 20:22:55.737243  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler" latency=1.229435ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.739935  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.739959  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.740006  112516 httplog.go:90] verb="GET" URI="/healthz" latency=982.946µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:55.749332  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.749355  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.749408  112516 httplog.go:90] verb="GET" URI="/healthz" latency=966.515µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.758140  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.102259ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.758382  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0214 20:22:55.777985  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node" latency=1.635199ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.798264  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.283589ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.798538  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0214 20:22:55.817372  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller" latency=1.363058ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.841300  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.841345  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.841409  112516 httplog.go:90] verb="GET" URI="/healthz" latency=4.914968ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:55.842106  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=6.062214ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.842476  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0214 20:22:55.849397  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.849426  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.849495  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.086135ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.857150  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller" latency=1.192736ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.878251  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.282619ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.878502  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0214 20:22:55.897223  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller" latency=1.274ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.918494  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.432481ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.918793  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0214 20:22:55.937306  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller" latency=1.29368ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.937633  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.937664  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.937730  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.253013ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:55.949552  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.949580  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.949635  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.123379ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.957948  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.07958ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.958184  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0214 20:22:55.977437  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller" latency=1.4439ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.998176  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.204729ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.998569  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0214 20:22:56.017711  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller" latency=1.436462ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.037555  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.037594  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.037705  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.167655ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:56.038729  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.747438ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.038932  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0214 20:22:56.049978  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.050004  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.050064  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.059464ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.058002  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller" latency=1.102316ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.077921  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.029178ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.078181  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0214 20:22:56.100256  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpointslice-controller" latency=1.222783ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.118917  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.926756ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.119172  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0214 20:22:56.138057  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.138090  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.138138  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.455ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:56.138595  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller" latency=907.094µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.149758  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.149784  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.149855  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.460947ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.157817  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.970584ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.158084  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0214 20:22:56.177523  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector" latency=1.34311ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.198404  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.571142ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.198781  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0214 20:22:56.217383  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler" latency=1.389427ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.239083  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.239118  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.239202  112516 httplog.go:90] verb="GET" URI="/healthz" latency=2.751467ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:56.239202  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.258102ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.239451  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0214 20:22:56.249633  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.249663  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.249727  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.182698ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.257308  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller" latency=1.256993ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.278248  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.082166ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.278536  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0214 20:22:56.298400  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller" latency=1.307394ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.318421  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.393219ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.318704  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0214 20:22:56.337418  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller" latency=1.390131ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.337539  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.337559  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.337607  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.119972ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:56.349548  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.349579  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.349646  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.212546ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.358177  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.247681ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.358425  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0214 20:22:56.385362  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder" latency=9.338928ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.398443  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.418116ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.398698  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0214 20:22:56.422944  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector" latency=6.974037ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.438397  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.393484ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.438711  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0214 20:22:56.513790  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller" latency=57.450187ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35046": 
I0214 20:22:56.514395  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.514428  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.514503  112516 httplog.go:90] verb="GET" URI="/healthz" latency=66.158607ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.514601  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.514622  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.514661  112516 httplog.go:90] verb="GET" URI="/healthz" latency=77.703806ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:56.518041  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.058487ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35046": 
I0214 20:22:56.518315  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0214 20:22:56.519614  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller" latency=875.8µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.522222  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.16227ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.522447  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0214 20:22:56.537061  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller" latency=1.145238ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.537418  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.537441  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.537488  112516 httplog.go:90] verb="GET" URI="/healthz" latency=880.009µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:56.553054  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.553098  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.553183  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.125592ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.558026  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.022756ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.558251  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0214 20:22:56.577392  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller" latency=1.460517ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.598145  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.190394ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.598433  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0214 20:22:56.617168  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller" latency=1.229385ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.638132  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.161306ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.638250  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.638274  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.638324  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.432786ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:56.638367  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0214 20:22:56.649383  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.649410  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.649477  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.065606ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.657170  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller" latency=1.256126ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.677686  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.748983ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.677941  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0214 20:22:56.698353  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller" latency=2.43817ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.719708  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.747959ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.720181  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0214 20:22:56.739463  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller" latency=2.241957ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.739684  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.739705  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.739749  112516 httplog.go:90] verb="GET" URI="/healthz" latency=2.084568ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:56.749513  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.749546  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.749598  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.222434ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.758145  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.130451ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.758362  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0214 20:22:56.777191  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller" latency=1.229308ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.798158  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.955421ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.798408  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0214 20:22:56.817375  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller" latency=1.388276ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.858484  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.858528  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.858613  112516 httplog.go:90] verb="GET" URI="/healthz" latency=21.732143ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:56.859407  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=23.485255ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.859654  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0214 20:22:56.869415  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller" latency=9.510106ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.869617  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.869644  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.869688  112516 httplog.go:90] verb="GET" URI="/healthz" latency=21.013462ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35090": 
I0214 20:22:56.877877  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.88931ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35090": 
I0214 20:22:56.878086  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0214 20:22:56.932879  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader" latency=36.884496ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35090": 
I0214 20:22:57.036974  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.036989  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.037010  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.037016  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.037071  112516 httplog.go:90] verb="GET" URI="/healthz" latency=100.549261ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:57.037071  112516 httplog.go:90] verb="GET" URI="/healthz" latency=88.196012ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.037152  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=103.66847ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35090": 
I0214 20:22:57.039517  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=1.831479ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.039721  112516 storage_rbac.go:279] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0214 20:22:57.041104  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer" latency=1.156577ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.042931  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.281971ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.045496  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=2.030541ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.045783  112516 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0214 20:22:57.046864  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider" latency=856.384µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.048552  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.024588ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.049307  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.049333  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.049394  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.037284ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:57.051308  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=2.01849ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.051498  112516 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0214 20:22:57.052668  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner" latency=824.361µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.054295  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.134032ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.056396  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=1.657033ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.056663  112516 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0214 20:22:57.058164  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager" latency=1.03939ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.059815  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.202604ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.078260  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=2.317073ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.078599  112516 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0214 20:22:57.097157  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler" latency=1.24546ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.098866  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.21798ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.117949  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=1.987171ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.118390  112516 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0214 20:22:57.140072  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.140116  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.140193  112516 httplog.go:90] verb="GET" URI="/healthz" latency=3.66232ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:57.140651  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer" latency=4.694025ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.142457  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=1.324122ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.149733  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.149763  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.149829  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.137283ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.157934  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles" latency=2.070177ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.158246  112516 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0214 20:22:57.177372  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader" latency=1.247677ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.179246  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.356702ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.215023  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=2.961979ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.215293  112516 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0214 20:22:57.217105  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager" latency=1.119304ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.218685  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.199992ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.238227  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.238269  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.238335  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.84568ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:57.238364  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=2.388675ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.238602  112516 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0214 20:22:57.249595  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.249626  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.249678  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.20561ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.257347  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler" latency=1.355908ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.259076  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.266084ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.277787  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=1.800163ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.278225  112516 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0214 20:22:57.297357  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer" latency=1.365536ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.299148  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.196101ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.317892  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=1.922049ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.318146  112516 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0214 20:22:57.337236  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider" latency=1.227944ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.337500  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.337567  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.338160  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.111378ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:57.338865  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.16766ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.349347  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.349376  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.349470  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.075013ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.358120  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=2.191006ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.360330  112516 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0214 20:22:57.377613  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner" latency=1.627457ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.379943  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.669344ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.398112  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=2.146424ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.398357  112516 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0214 20:22:57.417519  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer" latency=1.550483ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.419591  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=1.476182ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.454375  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.454404  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.454511  112516 httplog.go:90] verb="GET" URI="/healthz" latency=8.882863ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:57.454835  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings" latency=10.05463ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.455051  112516 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0214 20:22:57.456316  112516 httplog.go:90] verb="GET" URI="/healthz" latency=3.584356ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35154": 
I0214 20:22:57.460412  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default" latency=1.138577ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.463042  112516 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=1.954677ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.465655  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default/services/kubernetes" latency=2.130588ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.473138  112516 httplog.go:90] verb="POST" URI="/api/v1/namespaces/default/services" latency=6.95545ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.474834  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency=1.123294ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.476772  112516 httplog.go:90] verb="POST" URI="/api/v1/namespaces/default/endpoints" latency=1.561946ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.478469  112516 httplog.go:90] verb="GET" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes" latency=1.100206ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.480881  112516 httplog.go:90] verb="POST" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices" latency=1.917788ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.537815  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.182484ms resp=200 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:35104": 
W0214 20:22:57.538702  112516 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0214 20:22:57.538734  112516 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0214 20:22:57.538767  112516 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0214 20:22:57.538824  112516 factory.go:167] Creating scheduler from algorithm provider 'DefaultProvider'
W0214 20:22:57.538898  112516 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0214 20:22:57.539590  112516 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0214 20:22:57.539610  112516 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0214 20:22:57.539643  112516 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0214 20:22:57.539708  112516 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0214 20:22:57.540095  112516 reflector.go:175] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:135
I0214 20:22:57.540109  112516 reflector.go:211] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:135
I0214 20:22:57.540535  112516 reflector.go:175] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:135
I0214 20:22:57.540547  112516 reflector.go:211] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:135
I0214 20:22:57.540961  112516 httplog.go:90] verb="GET" URI="/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0" latency=556.099µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.540968  112516 reflector.go:175] Starting reflector *v1.CSINode (1s) from k8s.io/client-go/informers/factory.go:135
I0214 20:22:57.540989  112516 reflector.go:211] Listing and watching *v1.CSINode from k8s.io/client-go/informers/factory.go:135
I0214 20:22:57.541301  112516 reflector.go:175] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:135
I0214 20:22:57.541313  112516 reflector.go:211] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:135
I0214 20:22:57.541517  112516 httplog.go:90] verb="GET" URI="/api/v1/persistentvolumes?limit=500&resourceVersion=0" latency=369.376µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:57.542198  112516 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=29283 labels= fields= timeout=5m21s
I0214 20:22:57.542261  112516 reflector.go:175] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:135
I0214 20:22:57.542275  112516 reflector.go:211] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:135
I0214 20:22:57.542312  112516 httplog.go:90] verb="GET" URI="/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0" latency=428.891µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35164": 
I0214 20:22:57.542011  112516 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=29283 labels= fields= timeout=9m25s
I0214 20:22:57.542844  112516 httplog.go:90] verb="GET" URI="/api/v1/nodes?limit=500&resourceVersion=0" latency=404.826µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35162": 
I0214 20:22:57.542850  112516 reflector.go:175] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:135
I0214 20:22:57.542875  112516 reflector.go:211] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:135
I0214 20:22:57.542914  112516 httplog.go:90] verb="GET" URI="/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0" latency=260.92µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35164": 
I0214 20:22:57.542959  112516 get.go:251] Starting watch for /apis/storage.k8s.io/v1/csinodes, rv=29286 labels= fields= timeout=6m16s
I0214 20:22:57.543304  112516 reflector.go:175] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:135
I0214 20:22:57.543318  112516 reflector.go:211] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:135
I0214 20:22:57.543306  112516 get.go:251] Starting watch for /api/v1/nodes, rv=29283 labels= fields= timeout=7m51s
I0214 20:22:57.543942  112516 httplog.go:90] verb="GET" URI="/api/v1/services?limit=500&resourceVersion=0" latency=829.992µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35162": 
I0214 20:22:57.543973  112516 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=29284 labels= fields= timeout=9m48s
I0214 20:22:57.544505  112516 get.go:251] Starting watch for /api/v1/services, rv=29417 labels= fields= timeout=7m4s
I0214 20:22:57.545095  112516 reflector.go:175] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:135
I0214 20:22:57.545126  112516 reflector.go:211] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:135
I0214 20:22:57.546114  112516 httplog.go:90] verb="GET" URI="/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0" latency=395.153µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35172": 
I0214 20:22:57.546721  112516 httplog.go:90] verb="GET" URI="/api/v1/pods?limit=500&resourceVersion=0" latency=306.995µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35172": 
I0214 20:22:57.546935  112516 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=29286 labels= fields= timeout=5m53s
I0214 20:22:57.547317  112516 get.go:251] Starting watch for /api/v1/pods, rv=29283 labels= fields= timeout=9m56s
I0214 20:22:57.639989  112516 shared_informer.go:236] caches populated
I0214 20:22:57.640028  112516 shared_informer.go:236] caches populated
I0214 20:22:57.640035  112516 shared_informer.go:236] caches populated
I0214 20:22:57.640041  112516 shared_informer.go:236] caches populated
I0214 20:22:57.640047  112516 shared_informer.go:236] caches populated
I0214 20:22:57.640053  112516 shared_informer.go:236] caches populated
I0214 20:22:57.640058  112516 shared_informer.go:236] caches populated
I0214 20:22:57.640064  112516 shared_informer.go:236] caches populated
I0214 20:22:57.640123  112516 shared_informer.go:236] caches populated
I0214 20:22:57.643051  112516 httplog.go:90] verb="POST" URI="/api/v1/nodes" latency=2.218827ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35194": 
I0214 20:22:57.643661  112516 node_tree.go:86] Added node "test-node-0" in group "" to NodeTree
I0214 20:22:57.643684  112516 eventhandlers.go:103] add event for node "test-node-0"
I0214 20:22:57.779516  112516 node_tree.go:86] Added node "test-node-1" in group "" to NodeTree
I0214 20:22:57.779544  112516 eventhandlers.go:103] add event for node "test-node-1"
I0214 20:22:57.780015  112516 httplog.go:90] verb="POST" URI="/api/v1/nodes" latency=136.426203ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35194": 
I0214 20:22:57.782791  112516 httplog.go:90] verb="POST" URI="/api/v1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/pods" latency=2.156488ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35194": 
I0214 20:22:57.783081  112516 eventhandlers.go:172] add event for unscheduled pod postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod
I0214 20:22:57.783116  112516 scheduling_queue.go:821] About to try and schedule pod postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod
I0214 20:22:57.783123  112516 scheduler.go:564] Attempting to schedule pod: postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod
W0214 20:22:57.783244  112516 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0214 20:22:57.783267  112516 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0214 20:22:57.783276  112516 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0214 20:22:57.783412  112516 scheduler_binder.go:279] AssumePodVolumes for pod "postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod", node "test-node-0"
I0214 20:22:57.783435  112516 scheduler_binder.go:289] AssumePodVolumes for pod "postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod", node "test-node-0": all PVCs bound and nothing to do
I0214 20:22:57.783498  112516 default_binder.go:51] Attempting to bind postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod to test-node-0
I0214 20:22:57.973983  112516 httplog.go:90] verb="POST" URI="/api/v1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/pods/test-pod/binding" latency=190.223796ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35194": 
I0214 20:22:57.974850  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/pods/test-pod" latency=86.218588ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35216": 
I0214 20:22:57.975268  112516 eventhandlers.go:204] delete event for unscheduled pod postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod
I0214 20:22:57.975359  112516 eventhandlers.go:221] add event for scheduled pod postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod 
I0214 20:22:57.975492  112516 scheduler.go:706] pod postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod is bound successfully on node "test-node-0", 2 nodes evaluated, 2 nodes were found feasible.
I0214 20:22:57.978488  112516 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/events" latency=2.693664ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35194": 
I0214 20:22:57.982833  112516 httplog.go:90] verb="DELETE" URI="/api/v1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/pods/test-pod" latency=7.396095ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35216": 
I0214 20:22:57.983227  112516 eventhandlers.go:270] delete event for scheduled pod postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod 
I0214 20:22:57.985447  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/pods/test-pod" latency=1.099352ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35216": 
I0214 20:22:57.989106  112516 httplog.go:90] verb="POST" URI="/api/v1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/pods" latency=3.211ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35216": 
I0214 20:22:57.989519  112516 eventhandlers.go:172] add event for unscheduled pod postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod
I0214 20:22:57.989557  112516 scheduling_queue.go:821] About to try and schedule pod postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod
I0214 20:22:57.989567  112516 scheduler.go:564] Attempting to schedule pod: postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod
I0214 20:22:57.989791  112516 scheduler_binder.go:279] AssumePodVolumes for pod "postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod", node "test-node-1"
I0214 20:22:57.989807  112516 scheduler_binder.go:289] AssumePodVolumes for pod "postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod", node "test-node-1": all PVCs bound and nothing to do
E0214 20:22:57.989868  112516 framework.go:615] error while running "prebind-plugin" prebind plugin for pod "test-pod": injecting failure for pod test-pod
E0214 20:22:57.989886  112516 factory.go:415] Error scheduling postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod: error while running "prebind-plugin" prebind plugin for pod "test-pod": injecting failure for pod test-pod; retrying
I0214 20:22:57.989909  112516 scheduler.go:743] Updating pod condition for postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod to (PodScheduled==False, Reason=SchedulerError)
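The three log lines above trace the framework's PreBind error path: the test's "prebind-plugin" returns an injected failure, so binding is aborted, the pod's condition is set to PodScheduled==False with Reason=SchedulerError, and the pod is sent back through the scheduling queue. As a rough illustration (not the test's actual source), a failure-injecting PreBind plugin might look like the sketch below, assuming the v1alpha1 scheduler framework interfaces used at this point in the tree; the type and field names are hypothetical.

// Minimal sketch of a failure-injecting PreBind plugin (hypothetical names),
// assuming the v1alpha1 scheduler framework of this era.
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
)

// failingPreBindPlugin aborts binding for one named pod.
type failingPreBindPlugin struct {
	failPodName string
}

var _ framework.PreBindPlugin = &failingPreBindPlugin{}

func (pp *failingPreBindPlugin) Name() string { return "prebind-plugin" }

// PreBind runs after a node is chosen but before the bind API call. Returning
// a non-success Status makes the framework skip Bind and PostBind and retry
// the pod, which is exactly the "retrying" path logged above; returning nil
// means success.
func (pp *failingPreBindPlugin) PreBind(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) *framework.Status {
	if pod.Name == pp.failPodName {
		return framework.NewStatus(framework.Error, "injecting failure for pod "+pod.Name)
	}
	return nil
}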
I0214 20:22:57.994673  112516 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/events" latency=2.612198ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35232": 
I0214 20:22:57.994741  112516 httplog.go:90] verb="PUT" URI="/api/v1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/pods/test-pod/status" latency=3.636497ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35216": 
I0214 20:22:57.994938  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/pods/test-pod" latency=4.259166ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35194": 
I0214 20:22:58.001073  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/pods/test-pod" latency=1.403859ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35216": 
I0214 20:22:58.005203  112516 scheduling_queue.go:821] About to try and schedule pod postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod
I0214 20:22:58.005253  112516 scheduler.go:724] Skip schedule deleting pod: postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod
I0214 20:22:58.007370  112516 httplog.go:90] verb="DELETE" URI="/api/v1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/pods/test-pod" latency=5.399408ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35216": 
I0214 20:22:58.012914  112516 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/events" latency=7.343633ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35232": 
I0214 20:22:58.012963  112516 eventhandlers.go:204] delete event for unscheduled pod postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod
I0214 20:22:58.015534  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/pods/test-pod" latency=1.164967ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35232": 
I0214 20:22:58.016209  112516 reflector.go:181] Stopping reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:135
I0214 20:22:58.016257  112516 reflector.go:181] Stopping reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:135
I0214 20:22:58.016278  112516 reflector.go:181] Stopping reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:135
I0214 20:22:58.016291  112516 reflector.go:181] Stopping reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:135
I0214 20:22:58.016311  112516 reflector.go:181] Stopping reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:135
I0214 20:22:58.016325  112516 reflector.go:181] Stopping reflector *v1.CSINode (1s) from k8s.io/client-go/informers/factory.go:135
I0214 20:22:58.016338  112516 reflector.go:181] Stopping reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:135
I0214 20:22:58.016356  112516 reflector.go:181] Stopping reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:135
I0214 20:22:58.016555  112516 httplog.go:90] verb="GET" URI="/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=29283&timeout=9m25s&timeoutSeconds=565&watch=true" latency=474.741016ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:58.016610  112516 httplog.go:90] verb="GET" URI="/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=29284&timeout=9m48s&timeoutSeconds=588&watch=true" latency=472.730541ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35168": 
I0214 20:22:58.016633  112516 httplog.go:90] verb="GET" URI="/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=29286&timeout=6m16s&timeoutSeconds=376&watch=true" latency=473.819391ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35166": 
I0214 20:22:58.016655  112516 httplog.go:90] verb="GET" URI="/api/v1/services?allowWatchBookmarks=true&resourceVersion=29417&timeout=7m4s&timeoutSeconds=424&watch=true" latency=472.293846ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35162": 
I0214 20:22:58.016555  112516 httplog.go:90] verb="GET" URI="/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=29283&timeout=5m21s&timeoutSeconds=321&watch=true" latency=474.591123ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:58.016760  112516 httplog.go:90] verb="GET" URI="/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=29286&timeout=5m53s&timeoutSeconds=353&watch=true" latency=469.963232ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35174": 
I0214 20:22:58.016777  112516 httplog.go:90] verb="GET" URI="/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=29283&timeout=7m51s&timeoutSeconds=471&watch=true" latency=473.612965ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35164": 
I0214 20:22:58.016806  112516 httplog.go:90] verb="GET" URI="/api/v1/pods?allowWatchBookmarks=true&resourceVersion=29283&timeout=9m56s&timeoutSeconds=596&watch=true" latency=469.64404ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35172": 
I0214 20:22:58.026633  112516 httplog.go:90] verb="DELETE" URI="/api/v1/nodes" latency=9.82868ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35232": 
I0214 20:22:58.026912  112516 controller.go:181] Shutting down kubernetes service endpoint reconciler
I0214 20:22:58.028249  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency=1.087774ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35232": 
I0214 20:22:58.030634  112516 httplog.go:90] verb="PUT" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency=1.870896ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35232": 
I0214 20:22:58.032552  112516 httplog.go:90] verb="GET" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes" latency=961.034µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35232": 
I0214 20:22:58.034703  112516 httplog.go:90] verb="PUT" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes" latency=1.647318ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35232": 
I0214 20:22:58.035049  112516 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0214 20:22:58.035102  112516 reflector.go:181] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0214 20:22:58.035289  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&resourceVersion=29283&timeout=6m38s&timeoutSeconds=398&watch=true" latency=3.897445458s resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34622": 
--- FAIL: TestPostBindPlugin (4.17s)
    framework_test.go:1084: test #0: Expected the postbind plugin to be called, but it was called 0 times.
    framework_test.go:1077: test #1: Did not expect the postbind plugin to be called, but it was called 1 time.
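
For context on what the two assertions check: PostBind is an informational extension point that the framework invokes only after a successful bind, so a normal bind (test #0) should bump the plugin's counter exactly once, while an injected PreBind failure (test #1) should leave it untouched. A rough sketch of such a counting plugin, under the same v1alpha1-framework assumption as above and with hypothetical names:

// Minimal sketch of a counting PostBind plugin (not the test's actual code).
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
)

// countingPostBindPlugin records how many times PostBind fires. PostBind
// returns nothing, so it can observe a completed bind but cannot fail it.
type countingPostBindPlugin struct {
	numPostBindCalled int
}

var _ framework.PostBindPlugin = &countingPostBindPlugin{}

func (pp *countingPostBindPlugin) Name() string { return "postbind-plugin" }

func (pp *countingPostBindPlugin) PostBind(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) {
	pp.numPostBindCalled++
}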

				from junit_20200214-201559.xml



2480 tests passed and 4 were skipped (details omitted).

Error lines from build-log.txt

... skipping 46 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 155: bogus-expected-to-fail: command not found
!!! [0214 20:05:42] Call tree:
!!! [0214 20:05:42]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0214 20:05:42]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0214 20:05:42]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:131 juLog(...)
!!! [0214 20:05:42]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:159 record_command(...)
!!! [0214 20:05:42]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [0214 20:05:42] Running kubeadm tests
+++ [0214 20:05:47] Building go targets for linux/amd64:
    cmd/kubeadm
+++ [0214 20:06:33] Running tests without code coverage
{"Time":"2020-02-14T20:08:03.755963646Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t49.547s\n"}
✓  cmd/kubeadm/test/cmd (49.547s)
... skipping 302 lines ...
+++ [0214 20:09:54] Building kube-controller-manager
+++ [0214 20:09:59] Building go targets for linux/amd64:
    cmd/kube-controller-manager
+++ [0214 20:10:29] Starting controller-manager
Flag --port has been deprecated, see --secure-port instead.
I0214 20:10:29.912818   55204 serving.go:313] Generated self-signed cert in-memory
W0214 20:10:30.726990   55204 authentication.go:410] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0214 20:10:30.727035   55204 authentication.go:268] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0214 20:10:30.727046   55204 authentication.go:292] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0214 20:10:30.727063   55204 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0214 20:10:30.727090   55204 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0214 20:10:30.727115   55204 controllermanager.go:161] Version: v1.18.0-alpha.5.131+fe85ca48d0096d
I0214 20:10:30.728209   55204 secure_serving.go:178] Serving securely on [::]:10257
I0214 20:10:30.728326   55204 tlsconfig.go:241] Starting DynamicServingCertificateController
I0214 20:10:30.728751   55204 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0214 20:10:30.728812   55204 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...
... skipping 39 lines ...
W0214 20:10:31.001045   55204 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0214 20:10:31.001083   55204 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0214 20:10:31.001108   55204 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0214 20:10:31.001124   55204 controllermanager.go:533] Started "disruption"
W0214 20:10:31.001640   55204 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0214 20:10:31.001679   55204 controllermanager.go:533] Started "csrapproving"
E0214 20:10:31.002364   55204 core.go:90] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0214 20:10:31.002395   55204 controllermanager.go:525] Skipping "service"
I0214 20:10:31.002526   55204 disruption.go:331] Starting disruption controller
I0214 20:10:31.002538   55204 shared_informer.go:206] Waiting for caches to sync for disruption
I0214 20:10:31.002585   55204 pvc_protection_controller.go:101] Starting PVC protection controller
I0214 20:10:31.002593   55204 shared_informer.go:206] Waiting for caches to sync for PVC protection
I0214 20:10:31.002653   55204 certificate_controller.go:118] Starting certificate controller "csrapproving"
... skipping 68 lines ...
W0214 20:10:31.259003   55204 controllermanager.go:512] "bootstrapsigner" is disabled
I0214 20:10:31.259153   55204 cleaner.go:82] Starting CSR cleaner controller
I0214 20:10:31.259340   55204 controllermanager.go:533] Started "persistentvolume-expander"
I0214 20:10:31.259466   55204 expand_controller.go:319] Starting expand controller
I0214 20:10:31.259475   55204 shared_informer.go:206] Waiting for caches to sync for expand
I0214 20:10:31.259498   55204 node_lifecycle_controller.go:77] Sending events to api server
E0214 20:10:31.259512   55204 core.go:231] failed to start cloud node lifecycle controller: no cloud provider provided
W0214 20:10:31.259519   55204 controllermanager.go:525] Skipping "cloud-node-lifecycle"
I0214 20:10:31.259754   55204 controllermanager.go:533] Started "replicationcontroller"
I0214 20:10:31.259861   55204 replica_set.go:181] Starting replicationcontroller controller
I0214 20:10:31.259872   55204 shared_informer.go:206] Waiting for caches to sync for ReplicationController
W0214 20:10:31.259990   55204 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0214 20:10:31.260014   55204 controllermanager.go:533] Started "serviceaccount"
... skipping 54 lines ...
I0214 20:10:31.777687   55204 shared_informer.go:206] Waiting for caches to sync for ReplicaSet
I0214 20:10:31.778033   55204 controllermanager.go:533] Started "statefulset"
I0214 20:10:31.778229   55204 stateful_set.go:146] Starting stateful set controller
I0214 20:10:31.778249   55204 shared_informer.go:206] Waiting for caches to sync for stateful set
I0214 20:10:31.778323   55204 controllermanager.go:533] Started "cronjob"
I0214 20:10:31.780041   55204 cronjob_controller.go:97] Starting CronJob Manager
W0214 20:10:31.792783   55204 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
I0214 20:10:31.796848   55204 shared_informer.go:213] Caches are synced for TTL 
I0214 20:10:31.860325   55204 shared_informer.go:213] Caches are synced for service account 
I0214 20:10:31.862323   51760 controller.go:606] quota admission added evaluator for: serviceaccounts
I0214 20:10:31.877254   55204 shared_informer.go:213] Caches are synced for namespace 
I0214 20:10:31.902769   55204 shared_informer.go:213] Caches are synced for certificate-csrapproving 
I0214 20:10:31.904448   55204 shared_informer.go:213] Caches are synced for PV protection 
... skipping 23 lines ...
Successful: --output json has correct server info
+++ [0214 20:10:32] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
I0214 20:10:32.278419   55204 shared_informer.go:213] Caches are synced for stateful set 
I0214 20:10:32.302768   55204 shared_informer.go:213] Caches are synced for disruption 
I0214 20:10:32.302812   55204 disruption.go:339] Sending events to api server.
I0214 20:10:32.304749   55204 shared_informer.go:213] Caches are synced for ClusterRoleAggregator 
E0214 20:10:32.318622   55204 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
E0214 20:10:32.325225   55204 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
Successful: --client --output json has correct client info
Successful: --client --output json has no server info
+++ [0214 20:10:32] Testing kubectl version: compare json output using additional --short flag
I0214 20:10:32.458705   55204 shared_informer.go:213] Caches are synced for resource quota 
I0214 20:10:32.465266   55204 shared_informer.go:213] Caches are synced for garbage collector 
I0214 20:10:32.465294   55204 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
... skipping 50 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0214 20:10:35] Creating namespace namespace-1581711035-4788
namespace/namespace-1581711035-4788 created
Context "test" modified.
+++ [0214 20:10:35] Testing RESTMapper
+++ [0214 20:10:35] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
... skipping 57 lines ...
namespace/namespace-1581711040-19172 created
Context "test" modified.
+++ [0214 20:10:40] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 58 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 25 lines ...
namespace/namespace-1581711051-21070 created
Context "test" modified.
+++ [0214 20:10:52] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:155: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:156: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:157: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 411 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
core.sh:189: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name, label selector, or --all flag specified
core.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:197: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector. 
core.sh:201: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:205: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:209: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:214: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 19 lines ...
poddisruptionbudget.policy/test-pdb-2 created
core.sh:258: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
poddisruptionbudget.policy/test-pdb-3 created
core.sh:264: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:268: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:274: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 206 lines ...
pod/valid-pod patched
core.sh:517: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
pod/valid-pod patched
core.sh:522: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
pod/valid-pod patched
core.sh:538: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0214 20:11:27] "kubectl patch with resourceVersion 552" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
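Aside (not part of the log): the Conflict above is the apiserver's optimistic-concurrency check rejecting a write built against a stale resourceVersion. A toy model of that check, purely illustrative and not the real storage layer:

```go
package main

import "fmt"

// object models the only two fields the conflict check cares about.
type object struct {
	resourceVersion int
	labels          map[string]string
}

// update succeeds only when the client's resourceVersion matches the
// server's current one; otherwise the client must re-read and retry,
// which is what "apply your changes to the latest version" means.
func (o *object) update(clientRV int, k, v string) error {
	if clientRV != o.resourceVersion {
		return fmt.Errorf("Conflict: object at resourceVersion %d, client sent %d", o.resourceVersion, clientRV)
	}
	o.labels[k] = v
	o.resourceVersion++
	return nil
}

func main() {
	pod := &object{resourceVersion: 552, labels: map[string]string{}}
	fmt.Println(pod.update(552, "name", "valid-pod")) // <nil>: versions match
	fmt.Println(pod.update(552, "name", "stale"))     // Conflict: server is now at 553
}
```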
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:562: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W0214 20:11:28.767821   55204 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
node/node-v1-test replaced
core.sh:599: Successful get node node-v1-test {{.metadata.annotations.a}}: b
(Bnode "node-v1-test" deleted
core.sh:606: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
core.sh:609: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
Edit cancelled, no changes made.
... skipping 22 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:2.0
    name: kubernetes-pause
has:localonlyvalue
core.sh:632: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:636: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:640: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Bpod/valid-pod labeled
core.sh:644: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:648: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 83 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0214 20:11:39] Creating namespace namespace-1581711099-7154
namespace/namespace-1581711099-7154 created
Context "test" modified.
+++ [0214 20:11:39] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 41 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [0214 20:11:39] "kubectl create with empty string list" returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

... skipping 17 lines ...
(Bpod "test-pod" deleted
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I0214 20:11:43.268138   51760 client.go:361] parsed scheme: "endpoint"
I0214 20:11:43.268181   51760 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:11:43.276194   51760 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
kind.mygroup.example.com/myobj serverside-applied (server dry run)
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
+++ exit code: 0
Recording: run_kubectl_run_tests
Running command: run_kubectl_run_tests

+++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 12 lines ...
(Bpod "nginx-extensions" deleted
Successful
message:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 3 lines ...
Context "test" modified.
+++ [0214 20:11:44] Testing kubectl create filter
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 31 lines ...
I0214 20:11:48.139658   55204 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581711105-26784", Name:"nginx", UID:"39f6af7d-92eb-4d39-8b4f-c671cd59b8de", APIVersion:"apps/v1", ResourceVersion:"610", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
I0214 20:11:48.142754   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711105-26784", Name:"nginx-8484dd655", UID:"61cf7296-44e1-483d-a353-d32bf4cb1a77", APIVersion:"apps/v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-np88k
I0214 20:11:48.145577   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711105-26784", Name:"nginx-8484dd655", UID:"61cf7296-44e1-483d-a353-d32bf4cb1a77", APIVersion:"apps/v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-djgt7
I0214 20:11:48.147869   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711105-26784", Name:"nginx-8484dd655", UID:"61cf7296-44e1-483d-a353-d32bf4cb1a77", APIVersion:"apps/v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-k4j5v
apps.sh:149: Successful get deployment nginx {{.metadata.name}}: nginx
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1581711105-26784\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1581711105-26784"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
I0214 20:11:53.741859   55204 horizontal.go:354] Horizontal Pod Autoscaler frontend has been deleted in namespace-1581711096-15513
deployment.apps/nginx configured
I0214 20:11:57.732967   55204 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581711105-26784", Name:"nginx", UID:"3568cdd0-276d-4175-a3c8-d53de842fc93", APIVersion:"apps/v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
I0214 20:11:57.736973   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711105-26784", Name:"nginx-668b6c7744", UID:"b4bb3b10-66e1-4d3f-993c-6841762f7344", APIVersion:"apps/v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-qrrs4
I0214 20:11:57.738983   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711105-26784", Name:"nginx-668b6c7744", UID:"b4bb3b10-66e1-4d3f-993c-6841762f7344", APIVersion:"apps/v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-86vwv
I0214 20:11:57.741372   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711105-26784", Name:"nginx-668b6c7744", UID:"b4bb3b10-66e1-4d3f-993c-6841762f7344", APIVersion:"apps/v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-rv5gw
... skipping 147 lines ...
+++ [0214 20:12:05] Creating namespace namespace-1581711125-24176
namespace/namespace-1581711125-24176 created
Context "test" modified.
+++ [0214 20:12:05] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 23 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1581711125-24176 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1581711125-24176 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I0214 20:12:07.573584   66096 loader.go:375] Config loaded from file:  /tmp/tmp.FHIHdzzHpY/.kube/config
I0214 20:12:07.575639   66096 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0214 20:12:07.605579   66096 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I0214 20:12:07.607309   66096 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 482 lines ...
Successful
message:NAME    DATA   AGE
one     0      1s
three   0      0s
two     0      0s
STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
Successful
message:STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
+++ [0214 20:12:14] Creating namespace namespace-1581711134-15318
namespace/namespace-1581711134-15318 created
Context "test" modified.
get.sh:153: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
... skipping 56 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
<no value>Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2020-02-14T20:12:14Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1581711134-15318", "resourceVersion":"740", "selfLink":"/api/v1/namespaces/namespace-1581711134-15318/pods/valid-pod", "uid":"3b3c1c65-d750-4861-8fe5-8462d8d50c23"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2020-02-14T20:12:14Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1581711134-15318","resourceVersion":"740","selfLink":"/api/v1/namespaces/namespace-1581711134-15318/pods/valid-pod","uid":"3b3c1c65-d750-4861-8fe5-8462d8d50c23"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2020-02-14T20:12:14Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1581711134-15318 resourceVersion:740 selfLink:/api/v1/namespaces/namespace-1581711134-15318/pods/valid-pod uid:3b3c1c65-d750-4861-8fe5-8462d8d50c23] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
has:map has no entry for key "missing"
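Aside (not part of the log): the two failures above come from different engines, jsonpath ({.missing}) and Go templates ({{.missing}}). The Go-template error is reproducible with the standard library alone; a small sketch, assuming strict missing-key handling, which matches the message in the log:

```go
package main

import (
	"fmt"
	"os"
	"text/template"
)

func main() {
	// A stand-in for the Pod object the log shows being fed to the template.
	obj := map[string]interface{}{"apiVersion": "v1", "kind": "Pod"}

	// missingkey=error makes Execute fail on absent keys, yielding
	// `map has no entry for key "missing"` as in the log above.
	tmpl := template.Must(template.New("output").
		Option("missingkey=error").
		Parse("{{.missing}}"))

	if err := tmpl.Execute(os.Stdout, obj); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```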
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:STATUS
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:valid-pod
Successful
message:pod/valid-pod
status/<unknown>
has not:STATUS
Successful
... skipping 45 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has not:STATUS
... skipping 42 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
... skipping 35 lines ...
+++ command: run_kubectl_exec_pod_tests
+++ [0214 20:12:20] Creating namespace namespace-1581711140-25918
namespace/namespace-1581711140-25918 created
Context "test" modified.
+++ [0214 20:12:20] Testing kubectl exec POD COMMAND
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 2 lines ...
+++ command: run_kubectl_exec_resource_name_tests
+++ [0214 20:12:21] Creating namespace namespace-1581711141-20836
namespace/namespace-1581711141-20836 created
Context "test" modified.
+++ [0214 20:12:21] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:error: the server doesn't have a resource type "foo"
has:error:
Successful
message:Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0214 20:12:21.944223   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711141-20836", Name:"frontend", UID:"29042ff3-7655-4035-866a-3499141c102e", APIVersion:"apps/v1", ResourceVersion:"800", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-v264p
I0214 20:12:21.947287   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711141-20836", Name:"frontend", UID:"29042ff3-7655-4035-866a-3499141c102e", APIVersion:"apps/v1", ResourceVersion:"800", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-lffk9
I0214 20:12:21.947426   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711141-20836", Name:"frontend", UID:"29042ff3-7655-4035-866a-3499141c102e", APIVersion:"apps/v1", ResourceVersion:"800", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7792f
configmap/test-set-env-config created
Successful
message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
Successful
message:Error from server (BadRequest): pod frontend-7792f does not have a host assigned
has not:not found
Successful
message:Error from server (BadRequest): pod frontend-7792f does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
W0214 20:12:22.885344   67266 helpers.go:534] --dry-run is deprecated and can be replaced with --dry-run=client.
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
W0214 20:12:23.049220   67296 helpers.go:534] --dry-run is deprecated and can be replaced with --dry-run=client.
Successful
... skipping 4 lines ...
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"39a2179c-0959-4a69-95b6-2aa9bc9bc32a","resourceVersion":"821","creationTimestamp":"2020-02-14T20:12:23Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"39a2179c-0959-4a69-95b6-2aa9bc9bc32a"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:Timeout exceeded while reading body
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          2s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 158 lines ...
foo.company.com/test patched
crd.sh:236: Successful get foos/test {{.patched}}: value1
foo.company.com/test patched
crd.sh:238: Successful get foos/test {{.patched}}: value2
foo.company.com/test patched
crd.sh:240: Successful get foos/test {{.patched}}: <no value>
+++ [0214 20:12:33] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 194 lines ...
crd.sh:450: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
namespace/non-native-resources created
bar.company.com/test created
crd.sh:455: Successful get bars {{len .items}}: 1
(Bnamespace "non-native-resources" deleted
crd.sh:458: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
+++ [0214 20:12:49] Testing recursive resources
+++ [0214 20:12:49] Creating namespace namespace-1581711169-27821
namespace/namespace-1581711169-27821 created
Context "test" modified.
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0214 20:12:49.881608   51760 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0214 20:12:49.882772   55204 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0214 20:12:49.883421   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0214 20:12:49.988724   51760 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0214 20:12:49.989916   55204 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0214 20:12:49.990622   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0214 20:12:50.103274   51760 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0214 20:12:50.104530   55204 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0214 20:12:50.105312   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0214 20:12:50.243830   51760 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0214 20:12:50.245026   55204 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0214 20:12:50.245794   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
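Aside (not part of the log): the broken fixture misspells the `kind` field as `ind`, so the decoder finds an empty Kind and rejects the object. A stdlib-only sketch of why, illustrative rather than the real Kubernetes decoder:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// typeMeta mirrors the two fields every Kubernetes object must carry.
type typeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

func main() {
	// The fixture's field is "ind", so Kind stays "" after decoding,
	// which is what "Object 'Kind' is missing" reports.
	raw := []byte(`{"apiVersion":"v1","ind":"Pod"}`)
	var tm typeMeta
	if err := json.Unmarshal(raw, &tm); err != nil {
		panic(err)
	}
	fmt.Printf("apiVersion=%q kind=%q\n", tm.APIVersion, tm.Kind)
}
```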
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:Name:         busybox0
Namespace:    namespace-1581711169-27821
Priority:     0
Node:         <none>
... skipping 159 lines ...
has:Object 'Kind' is missing
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0214 20:12:51.878956   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx created
I0214 20:12:52.057379   55204 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581711169-27821", Name:"nginx", UID:"b5b007ab-6c3a-4435-a0d8-902f083db8fb", APIVersion:"apps/v1", ResourceVersion:"995", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
I0214 20:12:52.060626   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711169-27821", Name:"nginx-f87d999f7", UID:"090020ea-ca15-4cf2-bc24-1ce567b14aed", APIVersion:"apps/v1", ResourceVersion:"996", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-kkhhc
I0214 20:12:52.064722   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711169-27821", Name:"nginx-f87d999f7", UID:"090020ea-ca15-4cf2-bc24-1ce567b14aed", APIVersion:"apps/v1", ResourceVersion:"996", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-qhlmv
I0214 20:12:52.065659   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711169-27821", Name:"nginx-f87d999f7", UID:"090020ea-ca15-4cf2-bc24-1ce567b14aed", APIVersion:"apps/v1", ResourceVersion:"996", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-49p4j
generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
... skipping 47 lines ...
deployment.apps "nginx" deleted
generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0214 20:12:52.834247   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0214 20:12:52.979535   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
E0214 20:12:53.189396   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
I0214 20:12:53.569286   55204 namespace_controller.go:185] Namespace has been deleted non-native-resources
generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0214 20:12:54.104496   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581711169-27821", Name:"busybox0", UID:"5641de34-8f16-43e7-8c93-ee8672a9edeb", APIVersion:"v1", ResourceVersion:"1028", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-h6hhx
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0214 20:12:54.108764   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581711169-27821", Name:"busybox1", UID:"3fd569bd-96ac-46dd-87c1-6eb484537110", APIVersion:"v1", ResourceVersion:"1030", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-nrv9m
generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0214 20:12:55.872819   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581711169-27821", Name:"busybox0", UID:"5641de34-8f16-43e7-8c93-ee8672a9edeb", APIVersion:"v1", ResourceVersion:"1054", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-48hs8
I0214 20:12:55.883245   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581711169-27821", Name:"busybox1", UID:"3fd569bd-96ac-46dd-87c1-6eb484537110", APIVersion:"v1", ResourceVersion:"1058", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-8djxb
generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
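
Unlike the surrounding runs, this create keeps client-side validation on, so the broken manifest is rejected earlier with "kind not set"; the other tests pass --validate=false, which only defers the failure to decode time ("Object 'Kind' is missing"). A sketch of the two paths, using the fixture tree's paths:

    # validation on: fails fast with "kind not set"
    kubectl create -f hack/testdata/recursive/deployment --recursive
    # validation off: the same broken file still fails, now at decode time
    kubectl create -f hack/testdata/recursive/deployment --recursive --validate=false
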
I0214 20:12:56.613716   55204 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581711169-27821", Name:"nginx1-deployment", UID:"16c74d34-47e4-4942-9371-2575032ab004", APIVersion:"apps/v1", ResourceVersion:"1074", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
I0214 20:12:56.617203   55204 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581711169-27821", Name:"nginx0-deployment", UID:"80c7c1dc-a1a9-4885-9ecd-cf7e6a560f1b", APIVersion:"apps/v1", ResourceVersion:"1076", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
I0214 20:12:56.619372   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711169-27821", Name:"nginx1-deployment-7bdbbfb5cf", UID:"b01ebf07-cafe-4e90-a331-18b9e42343e7", APIVersion:"apps/v1", ResourceVersion:"1075", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-rdqkl
I0214 20:12:56.622334   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711169-27821", Name:"nginx0-deployment-57c6bff7f6", UID:"d015f922-f466-401f-ae97-050877357912", APIVersion:"apps/v1", ResourceVersion:"1077", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-q7lks
I0214 20:12:56.622369   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711169-27821", Name:"nginx1-deployment-7bdbbfb5cf", UID:"b01ebf07-cafe-4e90-a331-18b9e42343e7", APIVersion:"apps/v1", ResourceVersion:"1075", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-6lgjn
I0214 20:12:56.626042   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711169-27821", Name:"nginx0-deployment-57c6bff7f6", UID:"d015f922-f466-401f-ae97-050877357912", APIVersion:"apps/v1", ResourceVersion:"1077", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-slv66
generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
E0214 20:12:56.793114   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
E0214 20:12:57.935936   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0214 20:12:59.001801   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581711169-27821", Name:"busybox0", UID:"423e2888-e6b7-42ac-a39a-54349c1a7755", APIVersion:"v1", ResourceVersion:"1124", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-6bl96
I0214 20:12:59.007750   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581711169-27821", Name:"busybox1", UID:"6340b2ed-7a15-4ce2-921f-0cd95a7fefb8", APIVersion:"v1", ResourceVersion:"1126", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-9cq7j
E0214 20:12:59.065227   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:no rollbacker has been implemented for "ReplicationController"
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
E0214 20:12:59.385946   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [0214 20:13:00] Testing kubectl(v1:namespaces)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created (dry run)
namespace/my-namespace created (server dry run)
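
Both dry-run modes are exercised here: --dry-run=client renders the object without contacting the server, while --dry-run=server runs the request through server-side admission but persists nothing, which is why the follow-up get still returns NotFound. Sketch:

    kubectl create namespace my-namespace --dry-run=client -o yaml   # local only
    kubectl create namespace my-namespace --dry-run=server           # admitted, not persisted
    kubectl get namespace my-namespace                               # still NotFound
    kubectl create namespace my-namespace                            # the real create
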
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1384: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
(Bnamespace "my-namespace" deleted
E0214 20:13:03.298143   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0214 20:13:05.683901   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0214 20:13:06.216415   55204 shared_informer.go:206] Waiting for caches to sync for resource quota
I0214 20:13:06.216482   55204 shared_informer.go:213] Caches are synced for resource quota 
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
I0214 20:13:06.725226   55204 shared_informer.go:206] Waiting for caches to sync for garbage collector
I0214 20:13:06.725302   55204 shared_informer.go:213] Caches are synced for garbage collector 
core.sh:1393: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
Successful
... skipping 30 lines ...
namespace "namespace-1581711144-21913" deleted
namespace "namespace-1581711144-26499" deleted
namespace "namespace-1581711145-27645" deleted
namespace "namespace-1581711147-23543" deleted
namespace "namespace-1581711149-22892" deleted
namespace "namespace-1581711169-27821" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1581711032-23725" deleted
... skipping 26 lines ...
namespace "namespace-1581711144-21913" deleted
namespace "namespace-1581711144-26499" deleted
namespace "namespace-1581711145-27645" deleted
namespace "namespace-1581711147-23543" deleted
namespace "namespace-1581711149-22892" deleted
namespace "namespace-1581711169-27821" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
namespace/quotas created
core.sh:1400: Successful get namespaces/quotas {{.metadata.name}}: quotas
core.sh:1401: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
resourcequota/test-quota created (dry run)
resourcequota/test-quota created (server dry run)
core.sh:1405: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
resourcequota/test-quota created
core.sh:1408: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: found:
(Bresourcequota "test-quota" deleted
I0214 20:13:07.821373   55204 resource_quota_controller.go:306] Resource quota has been deleted quotas/test-quota
namespace "quotas" deleted
E0214 20:13:08.573116   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0214 20:13:09.565703   55204 horizontal.go:354] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1581711169-27821
I0214 20:13:09.568715   55204 horizontal.go:354] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1581711169-27821
E0214 20:13:09.879213   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1420: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
namespace/other created
core.sh:1424: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1428: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
core.sh:1432: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:1434: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
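
Retrieving a single object by name together with --all-namespaces is rejected because a name is only unique within one namespace. Sketch:

    kubectl get pods valid-pod --all-namespaces
    # error: a resource cannot be retrieved by name across all namespaces
    kubectl get pods valid-pod --namespace=other   # name plus an explicit namespace is fine
    kubectl get pods --all-namespaces              # listing without a name is fine
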
core.sh:1441: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1445: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
(Bnamespace "other" deleted
... skipping 103 lines ...
secret/test-secret created
core.sh:820: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:821: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
secret "test-secret" deleted
secret/secret-string-data created
core.sh:843: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
E0214 20:13:22.297454   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:844: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
core.sh:845: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
secret "secret-string-data" deleted
core.sh:854: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
(Bsecret "test-secret" deleted
namespace "test-secrets" deleted
I0214 20:13:24.304912   55204 namespace_controller.go:185] Namespace has been deleted other
E0214 20:13:27.617353   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_configmap_tests
+++ [0214 20:13:28] Creating namespace namespace-1581711208-13402
E0214 20:13:28.083219   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1581711208-13402 created
Context "test" modified.
+++ [0214 20:13:28] Testing configmaps
configmap/test-configmap created
core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
(Bconfigmap "test-configmap" deleted
E0214 20:13:28.619912   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
namespace/test-configmaps created
core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
core.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
core.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-binary-configmap\" }}found{{end}}{{end}}:: :
configmap/test-configmap created (dry run)
... skipping 16 lines ...
+++ command: run_client_config_tests
+++ [0214 20:13:35] Creating namespace namespace-1581711215-16431
namespace/namespace-1581711215-16431 created
Context "test" modified.
+++ [0214 20:13:35] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
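
Each of these client-config failures happens before any request is sent: a kubeconfig path that does not exist (the stat error), then a context, cluster, and user that are not defined in the loaded config, and finally a config file with an unrecognized apiVersion. A sketch of the flag forms being exercised (the resource queried is illustrative):

    kubectl get pods --kubeconfig=missing        # error: stat missing: no such file or directory
    kubectl get pods --context=missing-context   # context was not found for specified context
    kubectl get pods --cluster=missing-cluster   # no server found for cluster "missing-cluster"
    kubectl get pods --user=missing-user         # auth info "missing-user" does not exist
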
+++ exit code: 0
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
... skipping 43 lines ...
Labels:                        <none>
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
... skipping 38 lines ...
                job-name=test-job
Annotations:    cronjob.kubernetes.io/instantiate: manual
Controlled By:  CronJob/pi
Parallelism:    1
Completions:    1
Start Time:     Fri, 14 Feb 2020 20:13:44 +0000
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=f566bc01-2eee-463e-aa72-e8d45c56437b
           job-name=test-job
  Containers:
   pi:
    Image:      k8s.gcr.io/perl
... skipping 365 lines ...
  selector:
    role: padawan
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:952: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
service/redis-master selector updated
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
I0214 20:13:55.355825   55204 namespace_controller.go:185] Namespace has been deleted test-jobs
core.sh:965: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
(Bservice "redis-master" deleted
core.sh:972: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:976: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
... skipping 41 lines ...
core.sh:1080: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
Flag --service-overrides has been deprecated, and will be removed in the future.
service/testmetadata created
pod/testmetadata created
core.sh:1084: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: testmetadata:
core.sh:1085: Successful get service testmetadata {{.metadata.annotations}}: map[zone-context:home]
E0214 20:13:59.964165   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/exposemetadata exposed
core.sh:1091: Successful get service exposemetadata {{.metadata.annotations}}: map[zone-context:work]
(Bservice "exposemetadata" deleted
service "testmetadata" deleted
pod "testmetadata" deleted
+++ exit code: 0
... skipping 36 lines ...
+++ [0214 20:14:02] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
daemonset.apps/bind created
apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1581711242-21108"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
daemonset.apps/bind skipped rollback (current template already matches revision 1)
E0214 20:14:02.918580   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:73: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:74: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind configured
apps.sh:77: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:78: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:79: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
... skipping 13 lines ...
 (dry run)
daemonset.apps/bind rolled back (server dry run)
apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:85: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:86: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps/bind rolled back
E0214 20:14:04.351401   55204 daemon_controller.go:292] namespace-1581711242-21108/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1581711242-21108", SelfLink:"/apis/apps/v1/namespaces/namespace-1581711242-21108/daemonsets/bind", UID:"a1eab1da-4ee9-43ae-9a33-2a0a1a03aa0b", ResourceVersion:"1643", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717308042, loc:(*time.Location)(0x6c69560)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1581711242-21108\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001d38240), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, 
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002de4238), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002caaa80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001d38280), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000506da0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002de428c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
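
DaemonSet rollbacks resolve --to-revision against the recorded ControllerRevisions, so an out-of-range revision fails cleanly without mutating the object, and the dry-run variants above preview a rollback the same way. Sketch:

    kubectl rollout undo daemonset/bind --to-revision=1         # roll back to a recorded revision
    kubectl rollout undo daemonset/bind --to-revision=1000000
    # error: unable to find specified revision 1000000 in history
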
apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:95: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind rolled back
E0214 20:14:04.934510   55204 daemon_controller.go:292] namespace-1581711242-21108/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1581711242-21108", SelfLink:"/apis/apps/v1/namespaces/namespace-1581711242-21108/daemonsets/bind", UID:"a1eab1da-4ee9-43ae-9a33-2a0a1a03aa0b", ResourceVersion:"1646", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717308042, loc:(*time.Location)(0x6c69560)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1581711242-21108\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001c7a2c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", 
Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002ea9f98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001a05ec0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001c7a2e0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000c36948)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002ea9fec)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:99: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:100: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps "bind" deleted
+++ exit code: 0
Recording: run_rc_tests
... skipping 32 lines ...
Namespace:    namespace-1581711245-26852
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1581711245-26852
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1581711245-26852
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1581711245-26852
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 27 lines ...
Namespace:    namespace-1581711245-26852
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1581711245-26852
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1581711245-26852
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
Namespace:    namespace-1581711245-26852
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 15 lines ...
core.sh:1150: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E0214 20:14:07.483283   55204 replica_set.go:200] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1581711245-26852 /api/v1/namespaces/namespace-1581711245-26852/replicationcontrollers/frontend 9dfec7a4-a2bc-471f-8aed-dc133a2ae683 1681 2 2020-02-14 20:14:06 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0030f9cf8 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0214 20:14:07.491079   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581711245-26852", Name:"frontend", UID:"9dfec7a4-a2bc-471f-8aed-dc133a2ae683", APIVersion:"v1", ResourceVersion:"1681", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-n4r4w
core.sh:1154: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1158: Successful get rc frontend {{.spec.replicas}}: 2
error: Expected replicas to be 3, was 2
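
The scale error is kubectl's optimistic precondition: when --current-replicas is supplied, the resize is applied only if the live replica count matches, so after the earlier scale to 2 a precondition of 3 fails and the count stays at 2. Sketch:

    kubectl scale rc frontend --current-replicas=3 --replicas=2   # precondition fails: current is 2
    kubectl scale rc frontend --current-replicas=2 --replicas=3   # matches, scales to 3
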
core.sh:1162: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1166: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller/frontend scaled
I0214 20:14:08.112118   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581711245-26852", Name:"frontend", UID:"9dfec7a4-a2bc-471f-8aed-dc133a2ae683", APIVersion:"v1", ResourceVersion:"1689", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xgvkv
core.sh:1170: Successful get rc frontend {{.spec.replicas}}: 3
core.sh:1174: Successful get rc frontend {{.spec.replicas}}: 3
... skipping 31 lines ...
(Bdeployment.apps "nginx-deployment" deleted
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
See 'kubectl expose -h' for help and examples
has:invalid deployment: no selectors
deployment.apps/nginx-deployment created
I0214 20:14:10.481876   55204 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581711245-26852", Name:"nginx-deployment", UID:"f16649e9-4a25-4251-bdbb-3f4997b575c6", APIVersion:"apps/v1", ResourceVersion:"1794", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
I0214 20:14:10.484641   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711245-26852", Name:"nginx-deployment-6986c7bc94", UID:"b3013c90-cf0b-4787-b036-cf011a76f2bf", APIVersion:"apps/v1", ResourceVersion:"1795", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-42945
I0214 20:14:10.487349   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711245-26852", Name:"nginx-deployment-6986c7bc94", UID:"b3013c90-cf0b-4787-b036-cf011a76f2bf", APIVersion:"apps/v1", ResourceVersion:"1795", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-qrnvr
... skipping 4 lines ...
(Bdeployment.apps "nginx-deployment" deleted
service "nginx-deployment" deleted
replicationcontroller/frontend created
I0214 20:14:11.099401   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581711245-26852", Name:"frontend", UID:"58124b2a-3572-438b-8998-53dd66f99075", APIVersion:"v1", ResourceVersion:"1825", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4mps9
I0214 20:14:11.104818   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581711245-26852", Name:"frontend", UID:"58124b2a-3572-438b-8998-53dd66f99075", APIVersion:"v1", ResourceVersion:"1825", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zqrfq
I0214 20:14:11.105856   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581711245-26852", Name:"frontend", UID:"58124b2a-3572-438b-8998-53dd66f99075", APIVersion:"v1", ResourceVersion:"1825", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-h6tml
E0214 20:14:11.163844   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1228: Successful get rc frontend {{.spec.replicas}}: 3
E0214 20:14:11.279436   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/frontend exposed
core.sh:1232: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
service/frontend-2 exposed
core.sh:1236: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 443
pod/valid-pod created
service/frontend-3 exposed
... skipping 6 lines ...
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
service "frontend-5" deleted
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
Successful
message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1317: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
(Bhorizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1321: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
(Bhorizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
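
kubectl autoscale treats --max as mandatory, which is the required-flag error here; the two HPAs created above correspond to invocations that did pass it. Sketch:

    kubectl autoscale rc frontend --min=2 --cpu-percent=70           # Error: required flag(s) "max" not set
    kubectl autoscale rc frontend --max=2 --cpu-percent=70           # min defaults (the first HPA above shows 1 2 70)
    kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80
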
replicationcontroller "frontend" deleted
core.sh:1330: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 25 lines ...
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
W0214 20:14:15.856124   78868 helpers.go:534] --dry-run is deprecated and can be replaced with --dry-run=client.
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
deployment.apps/nginx-deployment-resources created
I0214 20:14:16.058322   55204 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581711245-26852", Name:"nginx-deployment-resources", UID:"c52d1277-b132-4d78-a08b-deed952a9e09", APIVersion:"apps/v1", ResourceVersion:"1962", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-67f8cfff5 to 3
I0214 20:14:16.061574   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711245-26852", Name:"nginx-deployment-resources-67f8cfff5", UID:"e1c7e293-c710-4125-809c-7ee9de193fc5", APIVersion:"apps/v1", ResourceVersion:"1963", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-fkcnm
I0214 20:14:16.064936   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711245-26852", Name:"nginx-deployment-resources-67f8cfff5", UID:"e1c7e293-c710-4125-809c-7ee9de193fc5", APIVersion:"apps/v1", ResourceVersion:"1963", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-mt8k4
I0214 20:14:16.064977   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711245-26852", Name:"nginx-deployment-resources-67f8cfff5", UID:"e1c7e293-c710-4125-809c-7ee9de193fc5", APIVersion:"apps/v1", ResourceVersion:"1963", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-7z4wq
core.sh:1336: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
core.sh:1337: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
core.sh:1338: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment-resources resource requirements updated
I0214 20:14:16.443634   55204 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581711245-26852", Name:"nginx-deployment-resources", UID:"c52d1277-b132-4d78-a08b-deed952a9e09", APIVersion:"apps/v1", ResourceVersion:"1976", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-55c547f795 to 1
I0214 20:14:16.448647   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711245-26852", Name:"nginx-deployment-resources-55c547f795", UID:"5d3adfeb-4071-4ccc-9af6-72c5116e0857", APIVersion:"apps/v1", ResourceVersion:"1977", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-55c547f795-cd4v9
core.sh:1341: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
core.sh:1342: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
error: unable to find container named redis
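A hedged sketch of the failing call: kubectl set resources targets containers by name via -c, and this pod template has no container named redis:
kubectl set resources deployment nginx-deployment-resources -c=redis --limits=cpu=200m
# error: unable to find container named redis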
deployment.apps/nginx-deployment-resources resource requirements updated
I0214 20:14:16.814342   55204 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581711245-26852", Name:"nginx-deployment-resources", UID:"c52d1277-b132-4d78-a08b-deed952a9e09", APIVersion:"apps/v1", ResourceVersion:"1986", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-55c547f795 to 0
I0214 20:14:16.821401   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711245-26852", Name:"nginx-deployment-resources-55c547f795", UID:"5d3adfeb-4071-4ccc-9af6-72c5116e0857", APIVersion:"apps/v1", ResourceVersion:"1990", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-55c547f795-cd4v9
I0214 20:14:16.822188   55204 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581711245-26852", Name:"nginx-deployment-resources", UID:"c52d1277-b132-4d78-a08b-deed952a9e09", APIVersion:"apps/v1", ResourceVersion:"1988", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6d86564b45 to 1
I0214 20:14:16.826034   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711245-26852", Name:"nginx-deployment-resources-6d86564b45", UID:"7731b39d-d2af-40d2-b4ec-ac7db58e79e5", APIVersion:"apps/v1", ResourceVersion:"1994", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6d86564b45-fjzfc
core.sh:1347: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
... skipping 81 lines ...
    status: "True"
    type: Progressing
  observedGeneration: 4
  replicas: 4
  unavailableReplicas: 4
  updatedReplicas: 1
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
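A minimal sketch of correct --local usage, assuming a local manifest named deployment.yaml (hypothetical filename):
# --local never contacts the server, so the object must come from -f
kubectl set resources -f deployment.yaml --limits=cpu=200m --local -o yaml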
core.sh:1357: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1358: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1359: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 46 lines ...
                pod-template-hash=79b9bd9585
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-nginx-apps
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-nginx-apps
           pod-template-hash=79b9bd9585
  Containers:
   nginx:
    Image:        k8s.gcr.io/nginx:test-cmd
... skipping 102 lines ...
apps.sh:301: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
    Image:	k8s.gcr.io/nginx:test-cmd
deployment.apps/nginx rolled back (server dry run)
apps.sh:305: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx rolled back
apps.sh:309: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
error: unable to find specified revision 1000000 in history
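A hedged reconstruction of the failing rollback, plus the command that lists the revisions that do exist:
kubectl rollout undo deployment nginx --to-revision=1000000   # error: unable to find specified revision 1000000 in history
kubectl rollout history deployment nginx                      # shows the valid revision numbers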
apps.sh:312: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps/nginx rolled back
apps.sh:316: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx paused
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
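Both errors above stem from the paused state; a sketch of the sequence the test exercises:
kubectl rollout pause deployment nginx     # blocks undo and restart, as seen above
kubectl rollout resume deployment nginx    # clears the paused state
kubectl rollout restart deployment nginx   # now permitted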
deployment.apps/nginx resumed
deployment.apps/nginx rolled back
    deployment.kubernetes.io/revision-history: 1,3
error: desired revision (3) is different from the running revision (5)
deployment.apps/nginx restarted
I0214 20:14:28.565369   55204 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581711258-16035", Name:"nginx", UID:"1bc120e6-7ddb-44bb-b1dc-c13faf833275", APIVersion:"apps/v1", ResourceVersion:"2217", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-f87d999f7 to 2
I0214 20:14:28.573105   55204 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581711258-16035", Name:"nginx", UID:"1bc120e6-7ddb-44bb-b1dc-c13faf833275", APIVersion:"apps/v1", ResourceVersion:"2220", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-68b5549748 to 1
I0214 20:14:28.574836   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711258-16035", Name:"nginx-f87d999f7", UID:"2fb11c23-9c19-436f-9e48-c273184c006a", APIVersion:"apps/v1", ResourceVersion:"2221", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-f87d999f7-szbdw
I0214 20:14:28.577700   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711258-16035", Name:"nginx-68b5549748", UID:"b8f275d5-31f3-4d2f-bc08-5f84765be4a3", APIVersion:"apps/v1", ResourceVersion:"2224", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-68b5549748-9924x
Successful
... skipping 79 lines ...
apps.sh:356: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
I0214 20:14:31.404986   55204 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581711258-16035", Name:"nginx-deployment", UID:"2cb48fa6-6c20-4c49-b772-6de95fcd208e", APIVersion:"apps/v1", ResourceVersion:"2288", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-59df9b5f5b to 1
I0214 20:14:31.408311   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711258-16035", Name:"nginx-deployment-59df9b5f5b", UID:"67b156e3-8d7e-45dd-8bc8-1a796a88484a", APIVersion:"apps/v1", ResourceVersion:"2289", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-59df9b5f5b-dspcz
apps.sh:359: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:360: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
error: unable to find container named "redis"
deployment.apps/nginx-deployment image updated
apps.sh:365: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
apps.sh:369: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
... skipping 48 lines ...
I0214 20:14:35.348490   55204 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581711258-16035", Name:"nginx-deployment", UID:"de5b1609-a131-4b00-bfdd-7ed1ce133637", APIVersion:"apps/v1", ResourceVersion:"2428", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6b9f7756b4 to 0
I0214 20:14:35.398649   55204 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581711258-16035", Name:"nginx-deployment", UID:"de5b1609-a131-4b00-bfdd-7ed1ce133637", APIVersion:"apps/v1", ResourceVersion:"2430", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-98b7fd455 to 1
deployment.apps/nginx-deployment env updated
I0214 20:14:35.454614   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711258-16035", Name:"nginx-deployment-6b9f7756b4", UID:"22c2f9fc-79a6-4754-875c-62992beafb33", APIVersion:"apps/v1", ResourceVersion:"2431", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6b9f7756b4-nxgnd
deployment.apps/nginx-deployment env updated
deployment.apps "nginx-deployment" deleted
E0214 20:14:35.651851   55204 replica_set.go:535] sync "namespace-1581711258-16035/nginx-deployment-98b7fd455" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-98b7fd455": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1581711258-16035/nginx-deployment-98b7fd455, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: fd20d978-d3db-4610-887a-2d538063ab75, UID in object meta: 
configmap "test-set-env-config" deleted
secret "test-set-env-secret" deleted
E0214 20:14:35.849576   55204 replica_set.go:535] sync "namespace-1581711258-16035/nginx-deployment-d74969475" failed with replicasets.apps "nginx-deployment-d74969475" not found
+++ exit code: 0
Recording: run_rs_tests
Running command: run_rs_tests
E0214 20:14:35.899451   55204 replica_set.go:535] sync "namespace-1581711258-16035/nginx-deployment-6b9f7756b4" failed with replicasets.apps "nginx-deployment-6b9f7756b4" not found

+++ Running case: test-cmd.run_rs_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rs_tests
+++ [0214 20:14:35] Creating namespace namespace-1581711275-5108
E0214 20:14:35.949320   55204 replica_set.go:535] sync "namespace-1581711258-16035/nginx-deployment-868b664cb5" failed with replicasets.apps "nginx-deployment-868b664cb5" not found
namespace/namespace-1581711275-5108 created
Context "test" modified.
+++ [0214 20:14:36] Testing kubectl(v1:replicasets)
apps.sh:533: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I0214 20:14:36.357621   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711275-5108", Name:"frontend", UID:"3d5fbcd2-caca-4604-b2d9-a7ab3eecfae2", APIVersion:"apps/v1", ResourceVersion:"2464", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-m7cnt
I0214 20:14:36.360427   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711275-5108", Name:"frontend", UID:"3d5fbcd2-caca-4604-b2d9-a7ab3eecfae2", APIVersion:"apps/v1", ResourceVersion:"2464", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-prkmr
I0214 20:14:36.361499   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711275-5108", Name:"frontend", UID:"3d5fbcd2-caca-4604-b2d9-a7ab3eecfae2", APIVersion:"apps/v1", ResourceVersion:"2464", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dhm72
+++ [0214 20:14:36] Deleting rs
replicaset.apps "frontend" deleted
E0214 20:14:36.549045   55204 replica_set.go:535] sync "namespace-1581711275-5108/frontend" failed with replicasets.apps "frontend" not found
apps.sh:539: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:543: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I0214 20:14:36.837691   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711275-5108", Name:"frontend", UID:"4e9430b1-382f-42e9-a8f7-ad39d9b2bc4f", APIVersion:"apps/v1", ResourceVersion:"2480", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-lznds
I0214 20:14:36.840041   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711275-5108", Name:"frontend", UID:"4e9430b1-382f-42e9-a8f7-ad39d9b2bc4f", APIVersion:"apps/v1", ResourceVersion:"2480", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jcz7r
I0214 20:14:36.840672   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711275-5108", Name:"frontend", UID:"4e9430b1-382f-42e9-a8f7-ad39d9b2bc4f", APIVersion:"apps/v1", ResourceVersion:"2480", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-b6wmf
apps.sh:547: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
+++ [0214 20:14:36] Deleting rs
replicaset.apps "frontend" deleted
E0214 20:14:37.049074   55204 replica_set.go:535] sync "namespace-1581711275-5108/frontend" failed with Operation cannot be fulfilled on replicasets.apps "frontend": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1581711275-5108/frontend, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 4e9430b1-382f-42e9-a8f7-ad39d9b2bc4f, UID in object meta: 
apps.sh:551: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:553: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
pod "frontend-b6wmf" deleted
pod "frontend-jcz7r" deleted
pod "frontend-lznds" deleted
apps.sh:556: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 15 lines ...
Namespace:    namespace-1581711275-5108
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1581711275-5108
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
Namespace:    namespace-1581711275-5108
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 13 lines ...
Namespace:    namespace-1581711275-5108
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 25 lines ...
Namespace:    namespace-1581711275-5108
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1581711275-5108
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1581711275-5108
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
Namespace:    namespace-1581711275-5108
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 125 lines ...
I0214 20:14:40.307632   55204 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581711275-5108", Name:"scale-1", UID:"c046a333-6778-4865-93eb-d880377953d2", APIVersion:"apps/v1", ResourceVersion:"2548", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-1-5c5565bcd9 to 2
deployment.apps/scale-2 scaled
I0214 20:14:40.312417   55204 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581711275-5108", Name:"scale-2", UID:"0774ea77-f51c-4115-abb1-d83ce82cc071", APIVersion:"apps/v1", ResourceVersion:"2550", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-2-5c5565bcd9 to 2
I0214 20:14:40.314425   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711275-5108", Name:"scale-1-5c5565bcd9", UID:"6da00082-740f-455c-80a6-12a88cc8b823", APIVersion:"apps/v1", ResourceVersion:"2549", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-1-5c5565bcd9-gndqb
I0214 20:14:40.316370   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711275-5108", Name:"scale-2-5c5565bcd9", UID:"ebdfa7fe-deca-42c2-bfb2-0baf42cb17d6", APIVersion:"apps/v1", ResourceVersion:"2554", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-2-5c5565bcd9-xvjt9
apps.sh:601: Successful get deploy scale-1 {{.spec.replicas}}: 2
E0214 20:14:40.438255   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:602: Successful get deploy scale-2 {{.spec.replicas}}: 2
apps.sh:603: Successful get deploy scale-3 {{.spec.replicas}}: 1
deployment.apps/scale-1 scaled
I0214 20:14:40.750325   55204 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581711275-5108", Name:"scale-1", UID:"c046a333-6778-4865-93eb-d880377953d2", APIVersion:"apps/v1", ResourceVersion:"2568", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-1-5c5565bcd9 to 3
deployment.apps/scale-2 scaled
I0214 20:14:40.753604   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581711275-5108", Name:"scale-1-5c5565bcd9", UID:"6da00082-740f-455c-80a6-12a88cc8b823", APIVersion:"apps/v1", ResourceVersion:"2569", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-1-5c5565bcd9-8p2j7
... skipping 23 lines ...
service "frontend-2" deleted
apps.sh:630: Successful get rs frontend {{.metadata.generation}}: 1
replicaset.apps/frontend image updated
apps.sh:632: Successful get rs frontend {{.metadata.generation}}: 2
replicaset.apps/frontend env updated
apps.sh:634: Successful get rs frontend {{.metadata.generation}}: 3
E0214 20:14:42.555073   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend resource requirements updated (dry run)
replicaset.apps/frontend resource requirements updated (server dry run)
apps.sh:637: Successful get rs frontend {{.metadata.generation}}: 3
replicaset.apps/frontend resource requirements updated
apps.sh:639: Successful get rs frontend {{.metadata.generation}}: 4
replicaset.apps/frontend serviceaccount updated (dry run)
... skipping 26 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:680: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:684: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
replicaset.apps "frontend" deleted
+++ exit code: 0
Recording: run_stateful_set_tests
Running command: run_stateful_set_tests

+++ Running case: test-cmd.run_stateful_set_tests 
... skipping 61 lines ...
apps.sh:458: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:459: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
statefulset.apps/nginx rolled back
apps.sh:462: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:463: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
apps.sh:467: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:468: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
statefulset.apps/nginx rolled back
apps.sh:471: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
apps.sh:472: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 31 lines ...
Testing with file hack/testdata/multi-resource-yaml.yaml and replace with file hack/testdata/multi-resource-yaml-modify.yaml
generic-resources.sh:63: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
generic-resources.sh:64: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
service/mock created
replicationcontroller/mock created
I0214 20:14:51.957744   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581711291-15125", Name:"mock", UID:"eb4c4b1b-dc5c-4857-9e2b-98f59252af27", APIVersion:"v1", ResourceVersion:"2796", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mock-pzl4v
E0214 20:14:51.974985   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:72: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: mock:
generic-resources.sh:80: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: mock:
NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/mock   ClusterIP   10.0.0.189   <none>        99/TCP    1s

NAME                         DESIRED   CURRENT   READY   AGE
... skipping 15 lines ...
Name:         mock
Namespace:    namespace-1581711291-15125
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 56 lines ...
Name:         mock
Namespace:    namespace-1581711291-15125
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 56 lines ...
Name:         mock
Namespace:    namespace-1581711291-15125
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 41 lines ...
Namespace:    namespace-1581711291-15125
Selector:     app=mock
Labels:       app=mock
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 11 lines ...
Namespace:    namespace-1581711291-15125
Selector:     app=mock2
Labels:       app=mock2
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock2
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 60 lines ...
IP:                10.0.0.71
Port:              <unset>  99/TCP
TargetPort:        9949/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
E0214 20:15:01.757616   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "mock" deleted
service "mock2" deleted
service/mock replaced
service/mock2 replaced
generic-resources.sh:96: Successful get services mock {{.metadata.labels.status}}: replaced
generic-resources.sh:98: Successful get services mock2 {{.metadata.labels.status}}: replaced
... skipping 38 lines ...
storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
persistentvolume "pv0001" deleted
persistentvolume/pv0002 created
storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
persistentvolume "pv0002" deleted
persistentvolume/pv0003 created
E0214 20:15:05.427583   55204 pv_protection_controller.go:118] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
persistentvolume "pv0003" deleted
storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
persistentvolume/pv0001 created
E0214 20:15:05.874792   55204 pv_protection_controller.go:118] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:45: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
persistentvolume "pv0001" deleted
has:warning: deleting cluster-scoped resources
Successful
... skipping 529 lines ...
yes
has:the server doesn't have a resource type
Successful
message:yes
has:yes
Successful
message:error: --subresource can not be used with NonResourceURL
has:subresource can not be used with NonResourceURL
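A minimal sketch of the distinction kubectl auth can-i enforces here (resource and subresource names illustrative):
kubectl auth can-i get /logs --subresource=status   # error: --subresource can not be used with NonResourceURL
kubectl auth can-i get pods --subresource=log       # subresources only apply to resources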
Successful
Successful
message:yes
0
has:0
... skipping 39 lines ...
		{Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
legacy-script.sh:812: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
legacy-script.sh:813: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
legacy-script.sh:814: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
legacy-script.sh:815: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
Successful
message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
has:only rbac.authorization.k8s.io/v1 is supported
rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
role.rbac.authorization.k8s.io "testing-R" deleted
warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
... skipping 20 lines ...
replicationcontroller/cassandra created
I0214 20:15:13.634862   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581711313-28403", Name:"cassandra", UID:"85d22fae-7d79-40ff-ace0-c6b4ccdbb871", APIVersion:"v1", ResourceVersion:"3110", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-p4zbj
I0214 20:15:13.638238   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581711313-28403", Name:"cassandra", UID:"85d22fae-7d79-40ff-ace0-c6b4ccdbb871", APIVersion:"v1", ResourceVersion:"3110", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-w42hb
service/cassandra created
Waiting for Get all -l'app=cassandra' {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}} : expected: cassandra:cassandra:cassandra:cassandra::, got: cassandra:cassandra:cassandra:cassandra:

discovery.sh:91: FAIL!
Get all -l'app=cassandra' {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}
  Expected: cassandra:cassandra:cassandra:cassandra::
  Got:      cassandra:cassandra:cassandra:cassandra:
55 /home/prow/go/src/k8s.io/kubernetes/hack/lib/test.sh

discovery.sh:92: Successful get all -l'app=cassandra' {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}: cassandra:cassandra:cassandra:cassandra:
pod "cassandra-p4zbj" deleted
I0214 20:15:14.179787   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581711313-28403", Name:"cassandra", UID:"85d22fae-7d79-40ff-ace0-c6b4ccdbb871", APIVersion:"v1", ResourceVersion:"3116", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-vjldl
I0214 20:15:14.188050   55204 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581711313-28403", Name:"cassandra", UID:"85d22fae-7d79-40ff-ace0-c6b4ccdbb871", APIVersion:"v1", ResourceVersion:"3116", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-6hmc5
pod "cassandra-w42hb" deleted
E0214 20:15:14.196896   55204 replica_set.go:535] sync "namespace-1581711313-28403/cassandra" failed with Operation cannot be fulfilled on replicationcontrollers "cassandra": StorageError: invalid object, Code: 4, Key: /registry/controllers/namespace-1581711313-28403/cassandra, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 85d22fae-7d79-40ff-ace0-c6b4ccdbb871, UID in object meta: 
replicationcontroller "cassandra" deleted
E0214 20:15:14.200008   55204 replica_set.go:535] sync "namespace-1581711313-28403/cassandra" failed with replicationcontrollers "cassandra" not found
service "cassandra" deleted
+++ exit code: 0
Recording: run_kubectl_explain_tests
Running command: run_kubectl_explain_tests

+++ Running case: test-cmd.run_kubectl_explain_tests 
... skipping 114 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_swagger_tests
+++ [0214 20:15:15] Testing swagger
+++ exit code: 0
Recording: run_kubectl_sort_by_tests
Running command: run_kubectl_sort_by_tests
E0214 20:15:15.297258   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource

+++ Running case: test-cmd.run_kubectl_sort_by_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_sort_by_tests
+++ [0214 20:15:15] Testing kubectl --sort-by
get.sh:256: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 96 lines ...
get.sh:342: Successful get namespaces {{range.items}}{{if eq .metadata.name \"default\"}}{{.metadata.name}}:{{end}}{{end}}: default:
get.sh:346: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
get.sh:350: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
NAMESPACE                    NAME        READY   STATUS    RESTARTS   AGE
namespace-1581711313-28403   valid-pod   0/1     Pending   0          0s
E0214 20:15:18.819134   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/all-ns-test-1 created
serviceaccount/test created
namespace/all-ns-test-2 created
serviceaccount/test created
Successful
message:NAMESPACE                    NAME      SECRETS   AGE
... skipping 118 lines ...
namespace-1581711306-27885   default   0         13s
namespace-1581711313-28403   default   0         6s
some-other-random            default   0         7s
has:all-ns-test-2
namespace "all-ns-test-1" deleted
namespace "all-ns-test-2" deleted
E0214 20:15:28.156252   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0214 20:15:29.470419   55204 namespace_controller.go:185] Namespace has been deleted all-ns-test-1
get.sh:376: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
get.sh:380: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
get.sh:384: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1:
... skipping 565 lines ...
message:node/127.0.0.1 already uncordoned (server dry run)
has:already uncordoned
node-management.sh:134: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node/127.0.0.1 labeled
node-management.sh:139: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
Successful
message:error: cannot specify both a node name and a --selector option
See 'kubectl drain -h' for help and examples
has:cannot specify both a node name
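A hedged sketch of the mutually exclusive forms (node name and label are illustrative):
kubectl drain 127.0.0.1 -l test=label   # error: cannot specify both a node name and a --selector option
kubectl drain -l test=label             # selector form: drains every matching node instead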
Successful
message:error: USAGE: cordon NODE [flags]
See 'kubectl cordon -h' for help and examples
has:error\: USAGE\: cordon NODE
node/127.0.0.1 already uncordoned
Successful
message:error: You must provide one or more resources by argument or filename.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
   '<resource> <name>'
   '<resource>'
has:must provide one or more resources
... skipping 14 lines ...
+++ [0214 20:15:43] Testing kubectl plugins
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/version/kubectl-version
  - warning: kubectl-version overwrites existing command: "kubectl version"
error: one plugin warning was found
has:kubectl-version overwrites existing command: "kubectl version"
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
  - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
error: one plugin warning was found
has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
has:plugins are available
Successful
message:Unable read directory "test/fixtures/pkg/kubectl/plugins/empty" from your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory. Skipping...
error: unable to find any kubectl plugins in your PATH
has:unable to find any kubectl plugins in your PATH
Successful
message:I am plugin foo
has:plugin foo
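A minimal sketch of how such a plugin is wired up: any executable named kubectl-<name> on PATH becomes a subcommand (paths here are illustrative):
cat > ./kubectl-foo <<'EOF'
#!/bin/sh
echo "I am plugin foo"
EOF
chmod +x ./kubectl-foo
PATH="$PWD:$PATH" kubectl foo   # prints: I am plugin foo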
Successful
message:I am plugin bar called with args test/fixtures/pkg/kubectl/plugins/bar/kubectl-bar arg1
... skipping 10 lines ...

+++ Running case: test-cmd.run_impersonation_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_impersonation_tests
+++ [0214 20:15:44] Testing impersonation
Successful
message:error: requesting groups or user-extra for  without impersonating a user
has:without impersonating a user
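The error above appears when impersonation groups are requested without a user; a hedged sketch (csr.yaml is a hypothetical manifest like the one the test applies):
kubectl get pods --as-group=group1                                          # fails: groups without --as
kubectl create -f csr.yaml --as=user1                                       # impersonate a user
kubectl create -f csr.yaml --as=user1 --as-group=group2 --as-group=group1   # user plus groups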
certificatesigningrequest.certificates.k8s.io/foo created
E0214 20:15:44.975592   55204 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
certificatesigningrequest.certificates.k8s.io "foo" deleted
certificatesigningrequest.certificates.k8s.io/foo created
authorization.sh:74: Successful get csr/foo {{len .spec.groups}}: 3
authorization.sh:75: Successful get csr/foo {{range .spec.groups}}{{.}} {{end}}: group2 group1 ,,,chameleon 
... skipping 70 lines ...
I0214 20:15:48.493047   51760 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0214 20:15:48.493057   51760 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0214 20:15:48.493113   51760 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0214 20:15:48.493143   51760 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0214 20:15:48.493154   51760 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0214 20:15:48.493166   51760 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
E0214 20:15:48.493159   51760 controller.go:184] rpc error: code = Unavailable desc = transport is closing
I0214 20:15:48.493201   51760 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
junit report dir: /logs/artifacts
+++ [0214 20:15:48] Clean up complete
+ make test-integration
+++ [0214 20:15:53] Checking etcd is on PATH
/home/prow/go/src/k8s.io/kubernetes/third_party/etcd/etcd
... skipping 312 lines ...
    synthetic_master_test.go:722: UPDATE_NODE_APISERVER is not set

=== SKIP: test/integration/scheduler_perf TestSchedule100Node3KPods (0.00s)
    scheduler_test.go:73: Skipping because we want to run short tests


=== Failed
=== FAIL: test/integration/scheduler TestPostBindPlugin (4.17s)
W0214 20:22:53.861458  112516 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0214 20:22:53.861486  112516 services.go:51] Setting service IP to "10.0.0.1" (read-write).
I0214 20:22:53.861498  112516 master.go:314] Node port range unspecified. Defaulting to 30000-32767.
I0214 20:22:53.861510  112516 master.go:270] Using reconciler: 
I0214 20:22:53.861648  112516 config.go:625] Not requested to run hook priority-and-fairness-config-consumer
I0214 20:22:53.863377  112516 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
... skipping 475 lines ...
W0214 20:22:54.127344  112516 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0214 20:22:54.128023  112516 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.129027  112516 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.129718  112516 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.130672  112516 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.131542  112516 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ffabaccd-32ca-4835-9798-cec32e422d4d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0214 20:22:54.135833  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
W0214 20:22:54.135842  112516 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0214 20:22:54.135861  112516 healthz.go:186] healthz check poststarthook/bootstrap-controller failed: not finished
I0214 20:22:54.135871  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.135881  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.135890  112516 healthz.go:186] healthz check poststarthook/start-cluster-authentication-info-controller failed: not finished
I0214 20:22:54.135897  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/start-cluster-authentication-info-controller failed: reason withheld
healthz check failed
I0214 20:22:54.135939  112516 httplog.go:90] verb="GET" URI="/healthz" latency=292.459µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34622": 
I0214 20:22:54.135948  112516 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0214 20:22:54.135965  112516 shared_informer.go:206] Waiting for caches to sync for cluster_authentication_trust_controller
I0214 20:22:54.136220  112516 reflector.go:175] Starting reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0214 20:22:54.136235  112516 reflector.go:211] Listing and watching *v1.ConfigMap from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0214 20:22:54.136951  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0" latency=472.474µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34622": 
I0214 20:22:54.137615  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency=1.888328ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34624": 
I0214 20:22:54.138050  112516 get.go:251] Starting watch for /api/v1/namespaces/kube-system/configmaps, rv=29283 labels= fields= timeout=6m38s
I0214 20:22:54.139901  112516 httplog.go:90] verb="GET" URI="/api/v1/services" latency=976.337µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34624": 
I0214 20:22:54.145397  112516 httplog.go:90] verb="GET" URI="/api/v1/services" latency=1.152961ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34624": 
I0214 20:22:54.147757  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.147782  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.147794  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.147804  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.147842  112516 httplog.go:90] verb="GET" URI="/healthz" latency=212.799µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34624": 
I0214 20:22:54.150005  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=2.010753ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34632": 
I0214 20:22:54.150947  112516 httplog.go:90] verb="GET" URI="/api/v1/services" latency=879.891µs resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34624": 
I0214 20:22:54.151825  112516 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=1.42466ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34632": 
I0214 20:22:54.157635  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=5.408108ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34632": 
I0214 20:22:54.157977  112516 httplog.go:90] verb="GET" URI="/api/v1/services" latency=5.979266ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34624": 
... skipping 4 lines ...
I0214 20:22:54.166095  112516 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=1.817829ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:54.169569  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=2.054706ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:54.173365  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=3.422825ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:54.179925  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-node-lease" latency=1.535619ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:54.236137  112516 shared_informer.go:236] caches populated
I0214 20:22:54.236173  112516 shared_informer.go:213] Caches are synced for cluster_authentication_trust_controller 
I0214 20:22:54.236707  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.236742  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.236755  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.236764  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.236818  112516 httplog.go:90] verb="GET" URI="/healthz" latency=267.652µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:54.248649  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.248698  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.248720  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.248736  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.248808  112516 httplog.go:90] verb="GET" URI="/healthz" latency=338.53µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:54.336855  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.336893  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.336906  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.336915  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.336965  112516 httplog.go:90] verb="GET" URI="/healthz" latency=288.001µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
... skipping 130 lines ...
I0214 20:22:54.851737  112516 healthz.go:186] healthz check etcd failed: etcd client connection not yet established
I0214 20:22:54.851771  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.851783  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.851797  112516 healthz.go:200] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.851854  112516 httplog.go:90] verb="GET" URI="/healthz" latency=275.883µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
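
The repeated probes above show the apiserver's /healthz endpoint aggregating named checks: each verbose block lists [+] for a passing check and [-] for a failing one, and the endpoint keeps reporting failure until the etcd client connects and the post-start hooks finish. A minimal Go sketch of the kind of polling loop a harness could run against this endpoint (the base URL, retry interval, and timeout are assumptions for illustration, not values taken from the test):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthy polls <base>/healthz until it returns 200, printing the
    // verbose [+]/[-] check list on each failure, as seen in the log above.
    func waitForHealthy(base string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(base + "/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // every check reported [+]
                }
                fmt.Printf("not ready yet:\n%s\n", body)
            }
            time.Sleep(100 * time.Millisecond) // assumed retry interval
        }
        return fmt.Errorf("apiserver not healthy after %v", timeout)
    }

    func main() {
        // Assumed address; the integration test binds an ephemeral port.
        if err := waitForHealthy("http://127.0.0.1:8080", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }
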
I0214 20:22:54.863554  112516 client.go:361] parsed scheme: "endpoint"
I0214 20:22:54.863641  112516 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 20:22:54.937751  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:54.937780  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:54.937790  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:54.937863  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.279971ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
... skipping 24 lines ...
I0214 20:22:55.049754  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.049781  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:55.049793  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.049869  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.419932ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.137186  112516 httplog.go:90] verb="GET" URI="/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical" latency=1.443241ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.137599  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.817505ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34636": 
I0214 20:22:55.138135  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.138160  112516 healthz.go:186] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0214 20:22:55.138171  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.138232  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.520823ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:55.139877  112516 httplog.go:90] verb="POST" URI="/apis/scheduling.k8s.io/v1/priorityclasses" latency=1.818883ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.140046  112516 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0214 20:22:55.140391  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.989272ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34636": 
I0214 20:22:55.141327  112516 httplog.go:90] verb="GET" URI="/apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical" latency=1.060064ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.142481  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view" latency=1.3201ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34636": 
I0214 20:22:55.143596  112516 httplog.go:90] verb="POST" URI="/apis/scheduling.k8s.io/v1/priorityclasses" latency=1.84435ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.143791  112516 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0214 20:22:55.143814  112516 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
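
The bootstrap hook above creates the two built-in PriorityClasses with a GET that 404s followed by a POST that 201s: system-node-critical with value 2000001000 and system-cluster-critical with value 2000000000. A sketch that reads them back over the same paths the hook used (the server address is an assumption; in the test the port is chosen at runtime):

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        base := "http://127.0.0.1:8080" // assumed; the test binds an ephemeral port
        for _, name := range []string{"system-node-critical", "system-cluster-critical"} {
            resp, err := http.Get(base + "/apis/scheduling.k8s.io/v1/priorityclasses/" + name)
            if err != nil {
                fmt.Println(err)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            // 200 once the bootstrap hook has run; 404 beforehand, as in the log.
            fmt.Printf("%s: %d\n%s\n", name, resp.StatusCode, body)
        }
    }
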
I0214 20:22:55.144282  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/view" latency=1.393369ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34636": 
I0214 20:22:55.145484  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin" latency=775.643µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.146676  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/admin" latency=764.496µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.147917  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit" latency=749.818µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.149096  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.149120  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.149184  112516 httplog.go:90] verb="GET" URI="/healthz" latency=807.569µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.149468  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/edit" latency=1.065318ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.151245  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery" latency=1.396993ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.152321  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin" latency=754.811µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.154413  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.608767ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.154811  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/cluster-admin
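
Each default ClusterRole is reconciled with the get-or-create pattern visible in the 404/201 pairs above: GET the object, and POST it only when the GET returns 404. A sketch of that pattern against the raw RBAC endpoint (paths and status codes copied from the log; the base URL and role JSON are assumptions):

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
    )

    // ensureClusterRole mirrors the GET-404/POST-201 sequence in the log:
    // fetch the role, and create it only if it is missing.
    func ensureClusterRole(base, name string, roleJSON []byte) error {
        get, err := http.Get(base + "/apis/rbac.authorization.k8s.io/v1/clusterroles/" + name)
        if err != nil {
            return err
        }
        get.Body.Close()
        switch get.StatusCode {
        case http.StatusOK:
            return nil // already present, nothing to create
        case http.StatusNotFound:
            // missing: proceed to create, matching the 404 lines above
        default:
            return fmt.Errorf("unexpected status %d for %s", get.StatusCode, name)
        }
        post, err := http.Post(base+"/apis/rbac.authorization.k8s.io/v1/clusterroles",
            "application/json", bytes.NewReader(roleJSON))
        if err != nil {
            return err
        }
        post.Body.Close()
        if post.StatusCode != http.StatusCreated { // the 201s in the log
            return fmt.Errorf("create of %s failed: %d", name, post.StatusCode)
        }
        return nil
    }

    func main() {
        // Hypothetical role body and address, for illustration only.
        role := []byte(`{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"name":"example"}}`)
        fmt.Println(ensureClusterRole("http://127.0.0.1:8080", "example", role))
    }
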
... skipping 34 lines ...
I0214 20:22:55.229291  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.677105ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.229553  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0214 20:22:55.230760  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin" latency=866.66µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.233908  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.686777ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.234158  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0214 20:22:55.236252  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper" latency=1.167202ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.237714  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.237744  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.237789  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.227428ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:55.239125  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.970651ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.239435  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0214 20:22:55.241565  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator" latency=1.949965ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.244844  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.850252ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.245094  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0214 20:22:55.246364  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator" latency=889.029µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.249186  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.903746ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.249382  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0214 20:22:55.249993  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.250014  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.250053  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.003877ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.251791  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager" latency=2.24246ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.255472  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.965408ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.255915  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0214 20:22:55.257944  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns" latency=1.243088ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.264371  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=5.443572ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
... skipping 38 lines ...
I0214 20:22:55.331872  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.409977ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.332138  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0214 20:22:55.333461  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpointslice-controller" latency=1.092811ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.335828  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.802507ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.336052  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0214 20:22:55.337231  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller" latency=830.108µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.337982  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.338142  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.338335  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.682029ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:55.339291  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.665793ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.339508  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0214 20:22:55.340625  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector" latency=964.445µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.342592  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.591396ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.342799  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0214 20:22:55.344933  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler" latency=1.994768ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.347268  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.811658ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.347513  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0214 20:22:55.350183  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.350217  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.350282  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.484676ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.350342  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller" latency=1.253063ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.352733  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.883749ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.352944  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0214 20:22:55.353877  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller" latency=759.679µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.355747  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.424316ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
... skipping 38 lines ...
I0214 20:22:55.422229  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=1.98623ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.422481  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0214 20:22:55.423783  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller" latency=1.058985ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.426531  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency=2.198006ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.426950  112516 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0214 20:22:55.428007  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin" latency=852.956µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.437692  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.437717  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.437765  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.322286ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:55.438500  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.425059ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.438711  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0214 20:22:55.449433  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.449468  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.449534  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.145198ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.457275  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery" latency=1.32282ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.479065  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.773107ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.479455  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0214 20:22:55.499800  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user" latency=1.33261ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.518515  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.525025ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.519369  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0214 20:22:55.537433  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer" latency=1.489615ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.538209  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.538234  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.538286  112516 httplog.go:90] verb="GET" URI="/healthz" latency=906.36µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:55.549359  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.549395  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.549460  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.082222ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.558220  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.292022ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.558489  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0214 20:22:55.577393  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier" latency=1.340847ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.598102  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.081858ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.598359  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0214 20:22:55.617335  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager" latency=1.337338ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.637516  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.637552  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.637611  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.181582ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:55.638238  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.221343ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.638893  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0214 20:22:55.649895  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.649934  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.650001  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.613279ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.662005  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns" latency=1.292002ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.678159  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.137145ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.678415  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0214 20:22:55.724319  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler" latency=4.217975ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.732737  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=7.859075ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.733028  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0214 20:22:55.737243  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler" latency=1.229435ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.739935  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.739959  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.740006  112516 httplog.go:90] verb="GET" URI="/healthz" latency=982.946µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:55.749332  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.749355  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.749408  112516 httplog.go:90] verb="GET" URI="/healthz" latency=966.515µs resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.758140  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.102259ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.758382  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0214 20:22:55.777985  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node" latency=1.635199ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.798264  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.283589ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.798538  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0214 20:22:55.817372  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller" latency=1.363058ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.841300  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.841345  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.841409  112516 httplog.go:90] verb="GET" URI="/healthz" latency=4.914968ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:55.842106  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=6.062214ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.842476  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0214 20:22:55.849397  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.849426  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.849495  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.086135ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.857150  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller" latency=1.192736ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.878251  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.282619ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.878502  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0214 20:22:55.897223  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller" latency=1.274ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.918494  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.432481ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.918793  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0214 20:22:55.937306  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller" latency=1.29368ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:55.937633  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.937664  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.937730  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.253013ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:55.949552  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:55.949580  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:55.949635  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.123379ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.957948  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.07958ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.958184  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0214 20:22:55.977437  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller" latency=1.4439ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.998176  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.204729ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:55.998569  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0214 20:22:56.017711  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller" latency=1.436462ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.037555  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.037594  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.037705  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.167655ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:56.038729  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.747438ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.038932  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0214 20:22:56.049978  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.050004  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.050064  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.059464ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.058002  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller" latency=1.102316ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.077921  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.029178ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.078181  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0214 20:22:56.100256  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpointslice-controller" latency=1.222783ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.118917  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.926756ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.119172  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0214 20:22:56.138057  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.138090  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.138138  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.455ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:56.138595  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller" latency=907.094µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.149758  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.149784  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.149855  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.460947ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.157817  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.970584ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.158084  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0214 20:22:56.177523  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector" latency=1.34311ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.198404  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.571142ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.198781  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0214 20:22:56.217383  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler" latency=1.389427ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.239083  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.239118  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.239202  112516 httplog.go:90] verb="GET" URI="/healthz" latency=2.751467ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:56.239202  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.258102ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.239451  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0214 20:22:56.249633  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.249663  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.249727  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.182698ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.257308  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller" latency=1.256993ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.278248  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.082166ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.278536  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0214 20:22:56.298400  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller" latency=1.307394ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.318421  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.393219ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.318704  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0214 20:22:56.337418  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller" latency=1.390131ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.337539  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.337559  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.337607  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.119972ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:56.349548  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.349579  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.349646  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.212546ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.358177  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.247681ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.358425  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0214 20:22:56.385362  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder" latency=9.338928ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.398443  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.418116ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.398698  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0214 20:22:56.422944  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector" latency=6.974037ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.438397  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.393484ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.438711  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0214 20:22:56.513790  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller" latency=57.450187ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35046": 
I0214 20:22:56.514395  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.514428  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.514503  112516 httplog.go:90] verb="GET" URI="/healthz" latency=66.158607ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.514601  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.514622  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.514661  112516 httplog.go:90] verb="GET" URI="/healthz" latency=77.703806ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:56.518041  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.058487ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35046": 
I0214 20:22:56.518315  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0214 20:22:56.519614  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller" latency=875.8µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.522222  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.16227ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.522447  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0214 20:22:56.537061  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller" latency=1.145238ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.537418  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.537441  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.537488  112516 httplog.go:90] verb="GET" URI="/healthz" latency=880.009µs resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34638": 
I0214 20:22:56.553054  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.553098  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.553183  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.125592ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.558026  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.022756ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.558251  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0214 20:22:56.577392  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller" latency=1.460517ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.598145  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.190394ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.598433  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0214 20:22:56.617168  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller" latency=1.229385ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.638132  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.161306ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.638250  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.638274  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.638324  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.432786ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:56.638367  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0214 20:22:56.649383  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.649410  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.649477  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.065606ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.657170  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller" latency=1.256126ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.677686  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.748983ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.677941  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0214 20:22:56.698353  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller" latency=2.43817ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.719708  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=3.747959ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:56.720181  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0214 20:22:56.739463  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller" latency=2.241957ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.739684  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.739705  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.739749  112516 httplog.go:90] verb="GET" URI="/healthz" latency=2.084568ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:56.749513  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.749546  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.749598  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.222434ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.758145  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=2.130451ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.758362  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0214 20:22:56.777191  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller" latency=1.229308ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.798158  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.955421ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.798408  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0214 20:22:56.817375  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller" latency=1.388276ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.858484  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.858528  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.858613  112516 httplog.go:90] verb="GET" URI="/healthz" latency=21.732143ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:56.859407  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=23.485255ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.859654  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0214 20:22:56.869415  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller" latency=9.510106ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34638": 
I0214 20:22:56.869617  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:56.869644  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:56.869688  112516 httplog.go:90] verb="GET" URI="/healthz" latency=21.013462ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35090": 
I0214 20:22:56.877877  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency=1.88931ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35090": 
I0214 20:22:56.878086  112516 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0214 20:22:56.932879  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader" latency=36.884496ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35090": 
I0214 20:22:57.036974  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.036989  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.037010  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.037016  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.037071  112516 httplog.go:90] verb="GET" URI="/healthz" latency=100.549261ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:57.037071  112516 httplog.go:90] verb="GET" URI="/healthz" latency=88.196012ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.037152  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=103.66847ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35090": 
I0214 20:22:57.039517  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=1.831479ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.039721  112516 storage_rbac.go:279] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0214 20:22:57.041104  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer" latency=1.156577ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.042931  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.281971ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.045496  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=2.030541ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.045783  112516 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0214 20:22:57.046864  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider" latency=856.384µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.048552  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.024588ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.049307  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.049333  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.049394  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.037284ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:34786": 
I0214 20:22:57.051308  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=2.01849ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.051498  112516 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0214 20:22:57.052668  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner" latency=824.361µs resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.054295  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.134032ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.056396  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=1.657033ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
... skipping 3 lines ...
I0214 20:22:57.078260  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=2.317073ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.078599  112516 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0214 20:22:57.097157  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler" latency=1.24546ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.098866  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.21798ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.117949  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency=1.987171ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.118390  112516 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0214 20:22:57.140072  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.140116  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.140193  112516 httplog.go:90] verb="GET" URI="/healthz" latency=3.66232ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:57.140651  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer" latency=4.694025ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.142457  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=1.324122ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.149733  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.149763  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.149829  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.137283ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.157934  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles" latency=2.070177ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.158246  112516 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0214 20:22:57.177372  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader" latency=1.247677ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.179246  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.356702ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.215023  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=2.961979ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.215293  112516 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0214 20:22:57.217105  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager" latency=1.119304ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.218685  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.199992ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.238227  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.238269  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.238335  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.84568ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:57.238364  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=2.388675ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.238602  112516 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0214 20:22:57.249595  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.249626  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.249678  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.20561ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.257347  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler" latency=1.355908ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.259076  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.266084ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.277787  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=1.800163ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.278225  112516 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0214 20:22:57.297357  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer" latency=1.365536ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.299148  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.196101ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.317892  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=1.922049ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.318146  112516 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0214 20:22:57.337236  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider" latency=1.227944ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.337500  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.337567  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.338160  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.111378ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:57.338865  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.16766ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.349347  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.349376  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.349470  112516 httplog.go:90] verb="GET" URI="/healthz" latency=1.075013ms resp=0 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.358120  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=2.191006ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.360330  112516 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0214 20:22:57.377613  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner" latency=1.627457ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.379943  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system" latency=1.669344ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.398112  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency=2.146424ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.398357  112516 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0214 20:22:57.417519  112516 httplog.go:90] verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer" latency=1.550483ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.419591  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public" latency=1.476182ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.454375  112516 healthz.go:186] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0214 20:22:57.454404  112516 healthz.go:200] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0214 20:22:57.454511  112516 httplog.go:90] verb="GET" URI="/healthz" latency=8.882863ms resp=0 UserAgent="Go-http-client/1.1" srcIP="127.0.0.1:34786": 
I0214 20:22:57.454835  112516 httplog.go:90] verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings" latency=10.05463ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.455051  112516 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0214 20:22:57.456316  112516 httplog.go:90] verb="GET" URI="/healthz" latency=3.584356ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35154": 
I0214 20:22:57.460412  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/default" latency=1.138577ms resp=404 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
I0214 20:22:57.463042  112516 httplog.go:90] verb="POST" URI="/api/v1/namespaces" latency=1.954677ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35104": 
... skipping 82 lines ...
I0214 20:22:57.989106  112516 httplog.go:90] verb="POST" URI="/api/v1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/pods" latency=3.211ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35216": 
I0214 20:22:57.989519  112516 eventhandlers.go:172] add event for unscheduled pod postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod
I0214 20:22:57.989557  112516 scheduling_queue.go:821] About to try and schedule pod postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod
I0214 20:22:57.989567  112516 scheduler.go:564] Attempting to schedule pod: postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod
I0214 20:22:57.989791  112516 scheduler_binder.go:279] AssumePodVolumes for pod "postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod", node "test-node-1"
I0214 20:22:57.989807  112516 scheduler_binder.go:289] AssumePodVolumes for pod "postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod", node "test-node-1": all PVCs bound and nothing to do
E0214 20:22:57.989868  112516 framework.go:615] error while running "prebind-plugin" prebind plugin for pod "test-pod": injecting failure for pod test-pod
E0214 20:22:57.989886  112516 factory.go:415] Error scheduling postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod: error while running "prebind-plugin" prebind plugin for pod "test-pod": injecting failure for pod test-pod; retrying
I0214 20:22:57.989909  112516 scheduler.go:743] Updating pod condition for postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod to (PodScheduled==False, Reason=SchedulerError)
I0214 20:22:57.994673  112516 httplog.go:90] verb="POST" URI="/apis/events.k8s.io/v1beta1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/events" latency=2.612198ms resp=201 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35232": 
I0214 20:22:57.994741  112516 httplog.go:90] verb="PUT" URI="/api/v1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/pods/test-pod/status" latency=3.636497ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35216": 
I0214 20:22:57.994938  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/pods/test-pod" latency=4.259166ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35194": 
I0214 20:22:58.001073  112516 httplog.go:90] verb="GET" URI="/api/v1/namespaces/postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/pods/test-pod" latency=1.403859ms resp=200 UserAgent="scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:35216": 
I0214 20:22:58.005203  112516 scheduling_queue.go:821] About to try and schedule pod postbind-pluginaa6222c1-52e3-4c54-b839-3b23ac1c08d4/test-pod
... skipping 30 lines ...
    framework_test.go:1084: test #0: Expected the postbind plugin to be called, but it was called 0 times.
    framework_test.go:1077: test #1: Didn't expect the postbind plugin to be called, but it was called 1 time.
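Both assertions follow from the ordering of the framework's bind-phase extension points, visible in the earlier framework.go:615 error: PostBind is informational and runs only after a successful bind, so a PreBind plugin that injects a failure sends the pod back to the queue before PostBind can run. The toy model below reproduces that ordering; its types are illustrative stand-ins for, not the real, scheduler framework interfaces.

// A toy model of the bind-phase ordering the assertions rely on.
// PreBind runs first and may fail; PostBind runs only after a
// successful bind. These types are stand-ins, not the real
// scheduler framework API.
package main

import "fmt"

type pod struct{ name string }

type prebindPlugin struct{ failPreBind bool }

func (p *prebindPlugin) PreBind(pd pod) error {
	if p.failPreBind {
		return fmt.Errorf("injecting failure for pod %s", pd.name)
	}
	return nil
}

type postbindPlugin struct{ numPostBindCalled int }

func (p *postbindPlugin) PostBind(pd pod) { p.numPostBindCalled++ }

// runBindPhase mirrors the framework: a PreBind error aborts the cycle
// (the pod is requeued), so PostBind is never reached.
func runBindPhase(pre *prebindPlugin, post *postbindPlugin, pd pod) error {
	if err := pre.PreBind(pd); err != nil {
		return fmt.Errorf("error while running %q prebind plugin for pod %q: %v", "prebind-plugin", pd.name, err)
	}
	// (the bind itself is elided)
	post.PostBind(pd)
	return nil
}

func main() {
	pre := &prebindPlugin{failPreBind: true}
	post := &postbindPlugin{}
	if err := runBindPhase(pre, post, pod{name: "test-pod"}); err != nil {
		fmt.Println(err) // matches the E0214 framework.go:615 line above
	}
	fmt.Println("postbind calls:", post.numPostBindCalled) // 0 while PreBind keeps failing
}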


DONE 2358 tests, 4 skipped, 1 failure in 5.616s
+++ [0214 20:27:29] Saved JUnit XML test report to /logs/artifacts/junit_20200214-201559.xml
make[1]: *** [Makefile:185: test] Error 1
!!! [0214 20:27:29] Call tree:
!!! [0214 20:27:29]  1: hack/make-rules/test-integration.sh:97 runTests(...)
+++ [0214 20:27:29] Cleaning up etcd
+++ [0214 20:27:29] Integration test cleanup complete
make: *** [Makefile:204: test-integration] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up binfmt_misc ...
================================================================================
... skipping 2 lines ...