PR: RainbowMango: Migrate prometheus bucket functionality for metrics stability framework
Result: FAILURE
Tests: 1 failed / 2861 succeeded
Started: 2019-09-16 22:43
Elapsed: 33m13s
Revision: master:1bebaea4, 82745:65a57d86
Builder: gke-prow-ssd-pool-1a225945-khv0
pod: 3fe55561-d8d3-11e9-8d3e-e6dd98504fa2
infra-commit: 40bbf397c
repo: k8s.io/kubernetes
repo-commit: 193bdcc7c9d7e35ff2b80105e3a74f373b2c4522
repos: {u'k8s.io/kubernetes': u'master:1bebaea417fe473aac7423aab0cfffab029d6870,82745:65a57d863462d107bfc9472418a58046c3cd5550'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestNodePIDPressure 33s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$
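(The scheduler integration tests expect a local etcd reachable at http://127.0.0.1:2379, as the storage backend config in the log below shows. A minimal sketch of reproducing this from a kubernetes checkout, assuming the standard integration-test setup via the repo's make target rather than a bare go test invocation:

# install a test etcd if one is not already on PATH, then run the single test:
./hack/install-etcd.sh && export PATH="$PATH:$(pwd)/third_party/etcd"
make test-integration WHAT=./test/integration/scheduler KUBE_TEST_ARGS="-run TestNodePIDPressure$"
)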
=== RUN   TestNodePIDPressure
W0916 23:12:10.182406  108988 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0916 23:12:10.183986  108988 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0916 23:12:10.184098  108988 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0916 23:12:10.184174  108988 master.go:259] Using reconciler: 
I0916 23:12:10.188879  108988 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.189270  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.189313  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.190742  108988 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0916 23:12:10.190792  108988 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.191459  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.191490  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.191625  108988 reflector.go:158] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0916 23:12:10.193023  108988 watch_cache.go:405] Replace watchCache (rev: 30612) 
I0916 23:12:10.193741  108988 store.go:1342] Monitoring events count at <storage-prefix>//events
I0916 23:12:10.193789  108988 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.193829  108988 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0916 23:12:10.193997  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.194022  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.195379  108988 watch_cache.go:405] Replace watchCache (rev: 30612) 
I0916 23:12:10.196461  108988 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0916 23:12:10.196516  108988 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.196669  108988 reflector.go:158] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0916 23:12:10.196765  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.196808  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.197815  108988 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0916 23:12:10.198127  108988 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.198237  108988 watch_cache.go:405] Replace watchCache (rev: 30612) 
I0916 23:12:10.198328  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.198355  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.198438  108988 reflector.go:158] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0916 23:12:10.199835  108988 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0916 23:12:10.199938  108988 reflector.go:158] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0916 23:12:10.200116  108988 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.200273  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.200296  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.201232  108988 watch_cache.go:405] Replace watchCache (rev: 30612) 
I0916 23:12:10.201961  108988 watch_cache.go:405] Replace watchCache (rev: 30612) 
I0916 23:12:10.204028  108988 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0916 23:12:10.204144  108988 reflector.go:158] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0916 23:12:10.204462  108988 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.204636  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.204736  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.206128  108988 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0916 23:12:10.206390  108988 watch_cache.go:405] Replace watchCache (rev: 30612) 
I0916 23:12:10.206555  108988 reflector.go:158] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0916 23:12:10.206543  108988 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.206832  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.206878  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.207610  108988 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0916 23:12:10.207705  108988 reflector.go:158] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0916 23:12:10.207892  108988 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.208212  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.208334  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.208679  108988 watch_cache.go:405] Replace watchCache (rev: 30612) 
I0916 23:12:10.209081  108988 watch_cache.go:405] Replace watchCache (rev: 30612) 
I0916 23:12:10.210092  108988 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0916 23:12:10.210141  108988 reflector.go:158] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0916 23:12:10.210321  108988 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.210498  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.210537  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.211927  108988 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0916 23:12:10.211998  108988 reflector.go:158] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0916 23:12:10.212292  108988 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.212478  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.212610  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.213953  108988 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0916 23:12:10.214031  108988 reflector.go:158] Listing and watching *core.Node from storage/cacher.go:/minions
I0916 23:12:10.214157  108988 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.214324  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.214348  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.214800  108988 watch_cache.go:405] Replace watchCache (rev: 30612) 
I0916 23:12:10.215442  108988 watch_cache.go:405] Replace watchCache (rev: 30612) 
I0916 23:12:10.216300  108988 watch_cache.go:405] Replace watchCache (rev: 30612) 
I0916 23:12:10.217311  108988 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0916 23:12:10.217473  108988 reflector.go:158] Listing and watching *core.Pod from storage/cacher.go:/pods
I0916 23:12:10.217725  108988 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.218170  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.218208  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.218957  108988 watch_cache.go:405] Replace watchCache (rev: 30612) 
I0916 23:12:10.219897  108988 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0916 23:12:10.220087  108988 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.220199  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.220217  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.220301  108988 reflector.go:158] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0916 23:12:10.221144  108988 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0916 23:12:10.221185  108988 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.221404  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.221433  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.221518  108988 watch_cache.go:405] Replace watchCache (rev: 30612) 
I0916 23:12:10.221577  108988 reflector.go:158] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0916 23:12:10.222523  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.222554  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.223449  108988 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.223689  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.223781  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.224116  108988 watch_cache.go:405] Replace watchCache (rev: 30612) 
I0916 23:12:10.224942  108988 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0916 23:12:10.224973  108988 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0916 23:12:10.225081  108988 reflector.go:158] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0916 23:12:10.225571  108988 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.225803  108988 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.226505  108988 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.227336  108988 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.228105  108988 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.229238  108988 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.229959  108988 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.230189  108988 watch_cache.go:405] Replace watchCache (rev: 30613) 
I0916 23:12:10.230215  108988 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.230580  108988 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.231364  108988 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.232143  108988 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.232592  108988 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.234108  108988 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.234875  108988 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.236467  108988 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.237313  108988 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.239224  108988 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.239486  108988 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.239663  108988 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.239890  108988 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.240178  108988 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.240579  108988 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.240916  108988 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.242182  108988 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.242668  108988 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.244023  108988 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.245134  108988 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.245641  108988 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.246043  108988 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.246799  108988 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.251051  108988 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.253215  108988 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.254307  108988 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.255314  108988 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.256252  108988 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.256547  108988 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.256670  108988 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0916 23:12:10.256695  108988 master.go:461] Enabling API group "authentication.k8s.io".
I0916 23:12:10.256721  108988 master.go:461] Enabling API group "authorization.k8s.io".
I0916 23:12:10.257044  108988 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.257483  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.257522  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.259325  108988 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0916 23:12:10.259424  108988 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0916 23:12:10.261624  108988 watch_cache.go:405] Replace watchCache (rev: 30614) 
I0916 23:12:10.262836  108988 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.265670  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.265706  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.267288  108988 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0916 23:12:10.273120  108988 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0916 23:12:10.274515  108988 watch_cache.go:405] Replace watchCache (rev: 30616) 
I0916 23:12:10.279591  108988 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.280673  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.280816  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.286947  108988 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0916 23:12:10.286984  108988 master.go:461] Enabling API group "autoscaling".
I0916 23:12:10.287053  108988 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0916 23:12:10.287257  108988 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.287518  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.287555  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.288312  108988 watch_cache.go:405] Replace watchCache (rev: 30616) 
I0916 23:12:10.289502  108988 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0916 23:12:10.289909  108988 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.290245  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.290384  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.289982  108988 reflector.go:158] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0916 23:12:10.292047  108988 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0916 23:12:10.292188  108988 master.go:461] Enabling API group "batch".
I0916 23:12:10.292524  108988 reflector.go:158] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0916 23:12:10.292538  108988 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.292689  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.292708  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.294704  108988 watch_cache.go:405] Replace watchCache (rev: 30616) 
I0916 23:12:10.297073  108988 watch_cache.go:405] Replace watchCache (rev: 30616) 
I0916 23:12:10.297105  108988 reflector.go:158] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0916 23:12:10.297077  108988 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0916 23:12:10.297154  108988 master.go:461] Enabling API group "certificates.k8s.io".
I0916 23:12:10.297522  108988 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.297681  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.297708  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.298793  108988 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0916 23:12:10.298915  108988 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0916 23:12:10.299453  108988 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.299811  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.299961  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.300087  108988 watch_cache.go:405] Replace watchCache (rev: 30616) 
I0916 23:12:10.301099  108988 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0916 23:12:10.301121  108988 master.go:461] Enabling API group "coordination.k8s.io".
I0916 23:12:10.301137  108988 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0916 23:12:10.301218  108988 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0916 23:12:10.301328  108988 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.301460  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.301480  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.302178  108988 watch_cache.go:405] Replace watchCache (rev: 30616) 
I0916 23:12:10.302465  108988 watch_cache.go:405] Replace watchCache (rev: 30616) 
I0916 23:12:10.302967  108988 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0916 23:12:10.303026  108988 master.go:461] Enabling API group "extensions".
I0916 23:12:10.303215  108988 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.303320  108988 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0916 23:12:10.303405  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.303423  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.305655  108988 watch_cache.go:405] Replace watchCache (rev: 30617) 
I0916 23:12:10.307171  108988 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0916 23:12:10.307546  108988 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.307968  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.308174  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.307987  108988 reflector.go:158] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0916 23:12:10.311669  108988 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0916 23:12:10.311695  108988 master.go:461] Enabling API group "networking.k8s.io".
I0916 23:12:10.311742  108988 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.311909  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.311932  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.312074  108988 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0916 23:12:10.313259  108988 watch_cache.go:405] Replace watchCache (rev: 30617) 
I0916 23:12:10.319282  108988 watch_cache.go:405] Replace watchCache (rev: 30618) 
I0916 23:12:10.319905  108988 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0916 23:12:10.320099  108988 master.go:461] Enabling API group "node.k8s.io".
I0916 23:12:10.320589  108988 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.319967  108988 reflector.go:158] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0916 23:12:10.321948  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.322070  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.323256  108988 watch_cache.go:405] Replace watchCache (rev: 30618) 
I0916 23:12:10.333869  108988 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0916 23:12:10.334120  108988 reflector.go:158] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0916 23:12:10.339443  108988 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.339728  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.340424  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.340661  108988 watch_cache.go:405] Replace watchCache (rev: 30619) 
I0916 23:12:10.345952  108988 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0916 23:12:10.346156  108988 master.go:461] Enabling API group "policy".
I0916 23:12:10.346221  108988 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.346649  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.346709  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.346888  108988 reflector.go:158] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0916 23:12:10.348054  108988 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0916 23:12:10.348545  108988 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.348965  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.349472  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.349203  108988 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0916 23:12:10.351052  108988 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0916 23:12:10.351097  108988 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.351425  108988 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0916 23:12:10.351452  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.351476  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.352836  108988 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0916 23:12:10.353092  108988 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.353114  108988 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0916 23:12:10.353392  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.353420  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.354561  108988 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0916 23:12:10.354731  108988 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.354954  108988 watch_cache.go:405] Replace watchCache (rev: 30621) 
I0916 23:12:10.355218  108988 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0916 23:12:10.355041  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.355495  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.356821  108988 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0916 23:12:10.356920  108988 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0916 23:12:10.357069  108988 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.357248  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.357272  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.357381  108988 watch_cache.go:405] Replace watchCache (rev: 30621) 
I0916 23:12:10.357482  108988 watch_cache.go:405] Replace watchCache (rev: 30621) 
I0916 23:12:10.357723  108988 watch_cache.go:405] Replace watchCache (rev: 30621) 
I0916 23:12:10.359083  108988 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0916 23:12:10.359120  108988 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.359265  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.359291  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.359366  108988 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0916 23:12:10.360721  108988 watch_cache.go:405] Replace watchCache (rev: 30621) 
I0916 23:12:10.361020  108988 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0916 23:12:10.361126  108988 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0916 23:12:10.361309  108988 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.361510  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.361537  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.362470  108988 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0916 23:12:10.362507  108988 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0916 23:12:10.362567  108988 watch_cache.go:405] Replace watchCache (rev: 30621) 
I0916 23:12:10.362591  108988 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0916 23:12:10.363986  108988 watch_cache.go:405] Replace watchCache (rev: 30621) 
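
[editor's sketch] Each parsed scheme: "endpoint" / ccResolverWrapper pair above corresponds to a new etcd client being dialed over gRPC against http://127.0.0.1:2379 for that resource's storage. A rough stand-alone equivalent with the etcd v3 client; the package path, dial options, and the Get call are illustrative assumptions, not what the apiserver literally invokes:

    package main

    import (
        "context"
        "fmt"
        "time"

        clientv3 "go.etcd.io/etcd/clientv3" // assumed client package; the apiserver wraps this internally
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"http://127.0.0.1:2379"}, // same endpoint the resolver logs above
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        // One ranged read under the test's storage prefix, just to show the dialed client in use.
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        resp, err := cli.Get(ctx, "/9af8dd2e-ff77-466a-a6c1-a54a3fc56fab/", clientv3.WithPrefix())
        if err != nil {
            panic(err)
        }
        fmt.Println("keys under prefix:", resp.Count)
    }
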
I0916 23:12:10.365258  108988 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.365402  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.365424  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.366413  108988 watch_cache.go:405] Replace watchCache (rev: 30621) 
I0916 23:12:10.367550  108988 watch_cache.go:405] Replace watchCache (rev: 30621) 
I0916 23:12:10.367945  108988 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0916 23:12:10.368035  108988 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0916 23:12:10.368364  108988 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.368524  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.368619  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.369740  108988 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0916 23:12:10.369868  108988 master.go:461] Enabling API group "scheduling.k8s.io".
I0916 23:12:10.369969  108988 watch_cache.go:405] Replace watchCache (rev: 30621) 
I0916 23:12:10.370080  108988 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0916 23:12:10.370110  108988 master.go:450] Skipping disabled API group "settings.k8s.io".
I0916 23:12:10.370444  108988 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.370647  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.370670  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.371361  108988 watch_cache.go:405] Replace watchCache (rev: 30621) 
I0916 23:12:10.371563  108988 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0916 23:12:10.371760  108988 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.372322  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.372348  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.373356  108988 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0916 23:12:10.373398  108988 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.373418  108988 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0916 23:12:10.373607  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.373627  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.374512  108988 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0916 23:12:10.374536  108988 reflector.go:158] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0916 23:12:10.374515  108988 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0916 23:12:10.374585  108988 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.374787  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.374805  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.375190  108988 watch_cache.go:405] Replace watchCache (rev: 30621) 
I0916 23:12:10.375369  108988 watch_cache.go:405] Replace watchCache (rev: 30621) 
I0916 23:12:10.375594  108988 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0916 23:12:10.375722  108988 reflector.go:158] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0916 23:12:10.375822  108988 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.376008  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.376029  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.376456  108988 watch_cache.go:405] Replace watchCache (rev: 30621) 
I0916 23:12:10.377836  108988 watch_cache.go:405] Replace watchCache (rev: 30621) 
I0916 23:12:10.378039  108988 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0916 23:12:10.378187  108988 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0916 23:12:10.378220  108988 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.378329  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.378349  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.379185  108988 watch_cache.go:405] Replace watchCache (rev: 30621) 
I0916 23:12:10.380681  108988 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0916 23:12:10.380714  108988 master.go:461] Enabling API group "storage.k8s.io".
I0916 23:12:10.380887  108988 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0916 23:12:10.381084  108988 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.381251  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.381270  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.381789  108988 watch_cache.go:405] Replace watchCache (rev: 30621) 
I0916 23:12:10.387138  108988 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0916 23:12:10.387231  108988 reflector.go:158] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0916 23:12:10.389764  108988 watch_cache.go:405] Replace watchCache (rev: 30622) 
I0916 23:12:10.390357  108988 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.390564  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.390588  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.392250  108988 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0916 23:12:10.392422  108988 reflector.go:158] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0916 23:12:10.392517  108988 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.392704  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.392737  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.393832  108988 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0916 23:12:10.394039  108988 watch_cache.go:405] Replace watchCache (rev: 30623) 
I0916 23:12:10.394083  108988 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.394217  108988 reflector.go:158] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0916 23:12:10.394271  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.394304  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.396134  108988 watch_cache.go:405] Replace watchCache (rev: 30623) 
I0916 23:12:10.396489  108988 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0916 23:12:10.396539  108988 reflector.go:158] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0916 23:12:10.398087  108988 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.398569  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.398727  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.398601  108988 watch_cache.go:405] Replace watchCache (rev: 30623) 
I0916 23:12:10.399831  108988 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0916 23:12:10.399920  108988 master.go:461] Enabling API group "apps".
I0916 23:12:10.399969  108988 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.400081  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.400097  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.400195  108988 reflector.go:158] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0916 23:12:10.401426  108988 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0916 23:12:10.401572  108988 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.401810  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.402011  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.402215  108988 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0916 23:12:10.402315  108988 watch_cache.go:405] Replace watchCache (rev: 30623) 
I0916 23:12:10.403411  108988 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0916 23:12:10.403800  108988 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.404103  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.404242  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.403584  108988 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0916 23:12:10.403725  108988 watch_cache.go:405] Replace watchCache (rev: 30623) 
I0916 23:12:10.405694  108988 watch_cache.go:405] Replace watchCache (rev: 30623) 
I0916 23:12:10.406096  108988 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0916 23:12:10.406528  108988 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.407080  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.407248  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.406278  108988 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0916 23:12:10.409879  108988 watch_cache.go:405] Replace watchCache (rev: 30623) 
I0916 23:12:10.411540  108988 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0916 23:12:10.411757  108988 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0916 23:12:10.412136  108988 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.412720  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:10.411701  108988 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0916 23:12:10.412958  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:10.413763  108988 watch_cache.go:405] Replace watchCache (rev: 30623) 
I0916 23:12:10.415124  108988 store.go:1342] Monitoring events count at <storage-prefix>//events
I0916 23:12:10.415259  108988 master.go:461] Enabling API group "events.k8s.io".
I0916 23:12:10.415372  108988 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0916 23:12:10.416277  108988 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.416690  108988 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.417222  108988 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.417463  108988 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.417815  108988 watch_cache.go:405] Replace watchCache (rev: 30623) 
I0916 23:12:10.418325  108988 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.419316  108988 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.419766  108988 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.420063  108988 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.420313  108988 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.420597  108988 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.422093  108988 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.422596  108988 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.424897  108988 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.425688  108988 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.427251  108988 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.428126  108988 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.430602  108988 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.431001  108988 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.432044  108988 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.432415  108988 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 23:12:10.432473  108988 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0916 23:12:10.433325  108988 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.433528  108988 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.433991  108988 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.435239  108988 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.436192  108988 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.437454  108988 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.437856  108988 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.438982  108988 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.440118  108988 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.440469  108988 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.441521  108988 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 23:12:10.441600  108988 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0916 23:12:10.442817  108988 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.443159  108988 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.443986  108988 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.445035  108988 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.445793  108988 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.446790  108988 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.447805  108988 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.448729  108988 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.449361  108988 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.450298  108988 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.451313  108988 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 23:12:10.451420  108988 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0916 23:12:10.452322  108988 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.453437  108988 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 23:12:10.453545  108988 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0916 23:12:10.454381  108988 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.455112  108988 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.455505  108988 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.456459  108988 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.457283  108988 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.458136  108988 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.458996  108988 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 23:12:10.459090  108988 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0916 23:12:10.460411  108988 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.461407  108988 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.461918  108988 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.463150  108988 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.463502  108988 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.463835  108988 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.464869  108988 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.465260  108988 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.465659  108988 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.466718  108988 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.467181  108988 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.467602  108988 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 23:12:10.467726  108988 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0916 23:12:10.467743  108988 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0916 23:12:10.468755  108988 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.469573  108988 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.470509  108988 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.471495  108988 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.472796  108988 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9af8dd2e-ff77-466a-a6c1-a54a3fc56fab", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 23:12:10.478433  108988 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 23:12:10.478469  108988 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0916 23:12:10.478481  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:10.478492  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:10.478534  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:10.478548  108988 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:10.478618  108988 httplog.go:90] GET /healthz: (335.726µs) 0 [Go-http-client/1.1 127.0.0.1:55650]
I0916 23:12:10.480207  108988 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.90266ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55652]
I0916 23:12:10.484193  108988 httplog.go:90] GET /api/v1/services: (1.994573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55652]
I0916 23:12:10.490150  108988 httplog.go:90] GET /api/v1/services: (1.852917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55652]
I0916 23:12:10.493139  108988 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 23:12:10.493342  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:10.493438  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:10.493533  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:10.493625  108988 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:10.493926  108988 httplog.go:90] GET /healthz: (830.446µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55650]
I0916 23:12:10.494210  108988 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.203075ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55652]
I0916 23:12:10.496223  108988 httplog.go:90] GET /api/v1/services: (1.672119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:10.498513  108988 httplog.go:90] POST /api/v1/namespaces: (2.61686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55652]
I0916 23:12:10.498653  108988 httplog.go:90] GET /api/v1/services: (1.926974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55650]
I0916 23:12:10.500461  108988 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.405983ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:10.503140  108988 httplog.go:90] POST /api/v1/namespaces: (2.236074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:10.506818  108988 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (2.944925ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:10.513099  108988 httplog.go:90] POST /api/v1/namespaces: (1.831277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:10.579827  108988 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 23:12:10.579916  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:10.579937  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:10.579948  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:10.579957  108988 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:10.579996  108988 httplog.go:90] GET /healthz: (385.267µs) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:10.595094  108988 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 23:12:10.595138  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:10.595160  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:10.595171  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:10.595180  108988 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:10.595277  108988 httplog.go:90] GET /healthz: (360.293µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:10.679710  108988 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 23:12:10.681506  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:10.681592  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:10.681603  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:10.681612  108988 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:10.681673  108988 httplog.go:90] GET /healthz: (2.194355ms) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:10.695187  108988 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 23:12:10.695237  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:10.695249  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:10.695259  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:10.695275  108988 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:10.695313  108988 httplog.go:90] GET /healthz: (404.412µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:10.779515  108988 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 23:12:10.779557  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:10.779571  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:10.779581  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:10.779589  108988 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:10.779634  108988 httplog.go:90] GET /healthz: (329.194µs) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:10.795032  108988 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 23:12:10.795080  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:10.795093  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:10.795102  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:10.795111  108988 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:10.795162  108988 httplog.go:90] GET /healthz: (337.499µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:10.879522  108988 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 23:12:10.879567  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:10.879580  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:10.879590  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:10.879599  108988 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:10.879633  108988 httplog.go:90] GET /healthz: (321.001µs) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:10.895050  108988 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 23:12:10.895090  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:10.895104  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:10.895114  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:10.895122  108988 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:10.895162  108988 httplog.go:90] GET /healthz: (346.594µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:10.979436  108988 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 23:12:10.979475  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:10.979485  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:10.979492  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:10.979499  108988 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:10.979545  108988 httplog.go:90] GET /healthz: (314.281µs) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:10.994936  108988 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 23:12:10.994967  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:10.994977  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:10.994983  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:10.994989  108988 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:10.995033  108988 httplog.go:90] GET /healthz: (286.479µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.079488  108988 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 23:12:11.079532  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.079543  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:11.079550  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:11.079557  108988 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:11.079594  108988 httplog.go:90] GET /healthz: (272.004µs) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:11.095003  108988 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 23:12:11.095033  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.095042  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:11.095048  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:11.095058  108988 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:11.095092  108988 httplog.go:90] GET /healthz: (272.85µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.179496  108988 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 23:12:11.179536  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.179549  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:11.179559  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:11.179567  108988 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:11.179634  108988 httplog.go:90] GET /healthz: (324.042µs) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:11.182534  108988 client.go:361] parsed scheme: "endpoint"
I0916 23:12:11.182644  108988 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 23:12:11.196206  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.196239  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:11.196250  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:11.196272  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:11.196314  108988 httplog.go:90] GET /healthz: (1.529079ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.280882  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.280920  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:11.280931  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:11.280940  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:11.280983  108988 httplog.go:90] GET /healthz: (1.703285ms) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:11.295883  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.295921  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:11.295933  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:11.295951  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:11.296002  108988 httplog.go:90] GET /healthz: (1.187204ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.380333  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.380371  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:11.380381  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:11.380390  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:11.380524  108988 httplog.go:90] GET /healthz: (1.176816ms) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:11.395955  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.395994  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:11.396007  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:11.396016  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:11.396077  108988 httplog.go:90] GET /healthz: (1.282193ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.483269  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.483311  108988 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 23:12:11.483322  108988 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 23:12:11.483335  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 23:12:11.483378  108988 httplog.go:90] GET /healthz: (4.081513ms) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:11.484291  108988 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (4.933321ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:11.485715  108988 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.475425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55714]
I0916 23:12:11.488903  108988 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (3.519341ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.488903  108988 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (2.648151ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55714]
I0916 23:12:11.489188  108988 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0916 23:12:11.491121  108988 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.348084ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.497782  108988 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (6.15325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.497942  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (15.118242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55712]
I0916 23:12:11.498869  108988 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (7.851537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:11.499962  108988 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0916 23:12:11.499982  108988 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0916 23:12:11.502096  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.502140  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:11.502205  108988 httplog.go:90] GET /healthz: (2.120918ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55716]
I0916 23:12:11.502206  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.621717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55712]
I0916 23:12:11.508996  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (6.238417ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.511254  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.447485ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.512915  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (974.249µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.514631  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.330877ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.515780  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (750.507µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.517042  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (931.067µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.518921  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.283394ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.520341  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (888.287µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.523651  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.819589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.524163  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0916 23:12:11.526315  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.783734ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.530173  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.365776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.530408  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0916 23:12:11.532073  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.333739ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.535577  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.033171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.535886  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0916 23:12:11.537385  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.225052ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.540243  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.229363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.540916  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0916 23:12:11.542161  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (924.076µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.548240  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.589563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.548905  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0916 23:12:11.550827  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.596727ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.554445  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.134891ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.554673  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0916 23:12:11.555725  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (845.036µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.561211  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.653709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.563207  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0916 23:12:11.566961  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (3.342128ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.571044  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.500563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.571287  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0916 23:12:11.572791  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (974.296µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.573261  108988 cacher.go:777] cacher (*rbac.ClusterRole): 1 objects queued in incoming channel.
I0916 23:12:11.573283  108988 cacher.go:777] cacher (*rbac.ClusterRole): 2 objects queued in incoming channel.
I0916 23:12:11.575485  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.065068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.575863  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0916 23:12:11.578153  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (2.014983ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.580177  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.580310  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:11.582451  108988 httplog.go:90] GET /healthz: (3.303575ms) 0 [Go-http-client/1.1 127.0.0.1:55656]
I0916 23:12:11.582254  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.459619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.582893  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0916 23:12:11.584954  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.736874ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
E0916 23:12:11.585402  108988 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:45265/apis/events.k8s.io/v1beta1/namespaces/permit-plugincda76952-bee6-4dc3-b445-7b3eea618871/events: dial tcp 127.0.0.1:45265: connect: connection refused' (may retry after sleeping)
I0916 23:12:11.588509  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.94247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:11.588884  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0916 23:12:11.590714  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.528908ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:11.595691  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.595709  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.317704ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:11.595714  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:11.595801  108988 httplog.go:90] GET /healthz: (1.046168ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.596148  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0916 23:12:11.597185  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (818.861µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.600795  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.184031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.601052  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0916 23:12:11.607495  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (6.252535ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.610465  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.344443ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.610901  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0916 23:12:11.611922  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (837.066µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.613558  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.251855ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.613926  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0916 23:12:11.615014  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (841.105µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.617016  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.464667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.617415  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0916 23:12:11.618454  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (754.514µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.620624  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.648892ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.620981  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0916 23:12:11.622154  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (706.358µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.624429  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.845365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.624949  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0916 23:12:11.626563  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.296414ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.628442  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.330063ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.628886  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0916 23:12:11.630073  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (753.872µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.631754  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.207568ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.632339  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0916 23:12:11.633461  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (873.608µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.635411  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.600089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.635649  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0916 23:12:11.636484  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (725.638µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.638619  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.465939ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.639013  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0916 23:12:11.640405  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.006491ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.642111  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.374655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.642327  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0916 23:12:11.643482  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (874.57µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.646092  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.174974ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.646449  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0916 23:12:11.647605  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (849.036µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.650366  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.74061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.650582  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0916 23:12:11.651637  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (813.315µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.654274  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.935016ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.654525  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0916 23:12:11.655394  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (728.898µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.657371  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.703936ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.657728  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0916 23:12:11.659048  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.0828ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.661427  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.453445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.661737  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0916 23:12:11.662780  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (693.367µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.665747  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.411759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.666093  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0916 23:12:11.667175  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (856.361µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.669521  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.849635ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.669746  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0916 23:12:11.671053  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.099152ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.673562  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.040016ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.673928  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0916 23:12:11.675034  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (745.366µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.677452  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.8546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.681628  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0916 23:12:11.682524  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.682701  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:11.682940  108988 httplog.go:90] GET /healthz: (3.707644ms) 0 [Go-http-client/1.1 127.0.0.1:55656]
I0916 23:12:11.684503  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.342197ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.688390  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.541986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.688787  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0916 23:12:11.690573  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.334932ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.693370  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.12141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.693954  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0916 23:12:11.695444  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.205811ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.696027  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.696353  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:11.696718  108988 httplog.go:90] GET /healthz: (1.822103ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:11.698768  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.42029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.699550  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0916 23:12:11.702375  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.260057ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.705320  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.253931ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.705606  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0916 23:12:11.707706  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (759.297µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.710589  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.238712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.711012  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0916 23:12:11.711907  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (709.342µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.713952  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.718447ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.714231  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0916 23:12:11.715464  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (927.473µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.717458  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.534581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.717806  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0916 23:12:11.718942  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (884.365µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.721380  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.876438ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.721765  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0916 23:12:11.723236  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.169016ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.726892  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.374176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.727222  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0916 23:12:11.728114  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (701.42µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.730024  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.520095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.730349  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0916 23:12:11.731342  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (746.032µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.733347  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.537913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.733742  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0916 23:12:11.734802  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (731.294µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.736943  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.711672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.737301  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0916 23:12:11.738443  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (912.216µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.740281  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.45468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.740539  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0916 23:12:11.741575  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (757.335µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.754197  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.841518ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.754644  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0916 23:12:11.756313  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.358049ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.759581  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.816409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.759930  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0916 23:12:11.761408  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.142787ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.763769  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.71275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.764056  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0916 23:12:11.765433  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.155582ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.770380  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.420672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.771131  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0916 23:12:11.773974  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (2.614139ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.776076  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.515815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.776368  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0916 23:12:11.777439  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (811.092µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.780045  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.009458ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.780205  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0916 23:12:11.780973  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.780999  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:11.781028  108988 httplog.go:90] GET /healthz: (967.799µs) 0 [Go-http-client/1.1 127.0.0.1:55656]
I0916 23:12:11.781741  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.336193ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.783831  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.77452ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.784180  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0916 23:12:11.785559  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.075949ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.787664  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.57645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.788076  108988 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0916 23:12:11.797387  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.797429  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:11.797542  108988 httplog.go:90] GET /healthz: (2.863815ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.802678  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.0659ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.825143  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.352285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.825452  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0916 23:12:11.843417  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.723955ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.863905  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.215237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.864248  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0916 23:12:11.882888  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.882959  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:11.883093  108988 httplog.go:90] GET /healthz: (3.622132ms) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:11.883876  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (2.085408ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:11.896003  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.896326  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:11.896573  108988 httplog.go:90] GET /healthz: (1.82056ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:11.905059  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.425871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:11.905447  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0916 23:12:11.926761  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.604739ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:11.944682  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.851791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:11.945117  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0916 23:12:11.967272  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (2.001409ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:11.984201  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.984242  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:11.984296  108988 httplog.go:90] GET /healthz: (3.932604ms) 0 [Go-http-client/1.1 127.0.0.1:55656]
I0916 23:12:11.985578  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.034969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:11.986202  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0916 23:12:11.996237  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:11.996265  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:11.996361  108988 httplog.go:90] GET /healthz: (1.406915ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.003882  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (2.190419ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.030530  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (8.776469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.030921  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0916 23:12:12.043570  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.47024ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.065052  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.254935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.065384  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0916 23:12:12.081453  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.081494  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.081540  108988 httplog.go:90] GET /healthz: (1.564772ms) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:12.084622  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (2.674792ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.096399  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.096437  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.096576  108988 httplog.go:90] GET /healthz: (1.753978ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.107669  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.977705ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.111577  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0916 23:12:12.124137  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (2.371024ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.144703  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.914282ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.145062  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0916 23:12:12.181911  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.181948  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.182016  108988 httplog.go:90] GET /healthz: (2.281517ms) 0 [Go-http-client/1.1 127.0.0.1:55656]
I0916 23:12:12.191312  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.788173ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.194390  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.425724ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.194661  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0916 23:12:12.195746  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.195805  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.195860  108988 httplog.go:90] GET /healthz: (1.168211ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.203853  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.679974ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.224424  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.728027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.224748  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0916 23:12:12.243398  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.645101ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.264481  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.706933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.264733  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0916 23:12:12.282983  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.283021  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.283073  108988 httplog.go:90] GET /healthz: (3.551179ms) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:12.284043  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.595489ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.296217  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.296251  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.296301  108988 httplog.go:90] GET /healthz: (1.476542ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.304574  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.922903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.304877  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0916 23:12:12.323511  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.743745ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.358456  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.212921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.358769  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0916 23:12:12.363385  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.746913ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.381528  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.381575  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.381644  108988 httplog.go:90] GET /healthz: (1.453123ms) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:12.383988  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.368578ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.384225  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0916 23:12:12.398418  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.398453  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.398507  108988 httplog.go:90] GET /healthz: (1.394788ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.402912  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.343338ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.429364  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.104626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.429901  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0916 23:12:12.443440  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.745418ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.464930  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.231327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.465283  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0916 23:12:12.480480  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.480513  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.480553  108988 httplog.go:90] GET /healthz: (1.292315ms) 0 [Go-http-client/1.1 127.0.0.1:55656]
I0916 23:12:12.482917  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.416763ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.499298  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.499363  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.499440  108988 httplog.go:90] GET /healthz: (1.333743ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.504292  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.726336ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.504800  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0916 23:12:12.523356  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.694072ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.544742  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.026765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.545112  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0916 23:12:12.563485  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.738588ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.580993  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.581033  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.581087  108988 httplog.go:90] GET /healthz: (1.737447ms) 0 [Go-http-client/1.1 127.0.0.1:55656]
I0916 23:12:12.584508  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.926082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.584751  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0916 23:12:12.595745  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.595777  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.595813  108988 httplog.go:90] GET /healthz: (1.076526ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.603271  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.667488ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.624257  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.614303ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.624549  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0916 23:12:12.643452  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.756201ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.664486  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.788953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.664836  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0916 23:12:12.690297  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.690345  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.690398  108988 httplog.go:90] GET /healthz: (1.485617ms) 0 [Go-http-client/1.1 127.0.0.1:55656]
I0916 23:12:12.690703  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (2.1326ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.698398  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.698475  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.698571  108988 httplog.go:90] GET /healthz: (2.857054ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.711923  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (10.095363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.712599  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0916 23:12:12.724314  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (2.433779ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.744868  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.078964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.745367  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0916 23:12:12.764202  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.673051ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:12.782774  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.782813  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.782875  108988 httplog.go:90] GET /healthz: (2.485178ms) 0 [Go-http-client/1.1 127.0.0.1:55656]
I0916 23:12:12.784576  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.860932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.784887  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0916 23:12:12.796629  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.796670  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.796715  108988 httplog.go:90] GET /healthz: (1.44734ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.807806  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.714614ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.825032  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.206198ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.825312  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0916 23:12:12.845006  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.873143ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.864925  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.195996ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.865253  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0916 23:12:12.880836  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.880888  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.880965  108988 httplog.go:90] GET /healthz: (1.549059ms) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:12.884609  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (3.053639ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.897967  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.898003  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.898053  108988 httplog.go:90] GET /healthz: (1.468614ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.904271  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.640198ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.904781  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0916 23:12:12.923384  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.725092ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.944451  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.650337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.944736  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0916 23:12:12.964128  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.752774ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.980800  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.980865  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.980916  108988 httplog.go:90] GET /healthz: (1.471986ms) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:12.984278  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.639761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:12.984615  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0916 23:12:12.996223  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:12.996265  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:12.996316  108988 httplog.go:90] GET /healthz: (1.464218ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.003357  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.6367ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.024954  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.214971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.025261  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0916 23:12:13.044133  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (2.276518ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.065588  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.516838ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.066038  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0916 23:12:13.080940  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:13.080976  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:13.081021  108988 httplog.go:90] GET /healthz: (1.518124ms) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:13.084190  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (2.035974ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.096320  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:13.096384  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:13.096440  108988 httplog.go:90] GET /healthz: (1.564557ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.105426  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.714982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.105985  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0916 23:12:13.123677  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.999813ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.145042  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.239016ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.145450  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0916 23:12:13.166399  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.920717ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.180897  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:13.180938  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:13.180990  108988 httplog.go:90] GET /healthz: (1.5998ms) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:13.184818  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.035542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.185104  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0916 23:12:13.196309  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:13.196351  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:13.196398  108988 httplog.go:90] GET /healthz: (1.509283ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.203881  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (2.155271ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.224759  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.963285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.225140  108988 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0916 23:12:13.243451  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.770336ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.246758  108988 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.508221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.265671  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.86733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.266029  108988 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0916 23:12:13.280953  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:13.280995  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:13.281089  108988 httplog.go:90] GET /healthz: (1.654238ms) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:13.283033  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.395251ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.287344  108988 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.788537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.296626  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:13.296685  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:13.296749  108988 httplog.go:90] GET /healthz: (1.811111ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.305061  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.42133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.305632  108988 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0916 23:12:13.323624  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.895331ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.329499  108988 httplog.go:90] GET /api/v1/namespaces/kube-system: (5.169165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.345095  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.277072ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.345646  108988 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0916 23:12:13.363719  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (2.008768ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.366171  108988 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.843658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.380948  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:13.380994  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:13.381064  108988 httplog.go:90] GET /healthz: (1.607736ms) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:13.384588  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.925298ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.384989  108988 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0916 23:12:13.396206  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:13.396242  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:13.396289  108988 httplog.go:90] GET /healthz: (1.330054ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.403446  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.69454ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.405821  108988 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.625275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.425147  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.359831ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.425810  108988 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
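The storage_rbac.go lines follow a create-if-missing reconcile pattern: each default Role or RoleBinding is looked up, the 404 triggers a POST, and the creation is logged. A sketch of that pattern with client-go, assuming the context-free client signatures of this era; the role passed in is purely illustrative:

package rbacseed

import (
	rbacv1 "k8s.io/api/rbac/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureRole creates the role only when a GET returns NotFound, mirroring the
// GET 404 -> POST 201 pairs in the log above.
func ensureRole(cs kubernetes.Interface, role *rbacv1.Role) error {
	_, err := cs.RbacV1().Roles(role.Namespace).Get(role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present, nothing to do
	}
	if !errors.IsNotFound(err) {
		return err
	}
	_, err = cs.RbacV1().Roles(role.Namespace).Create(role)
	return err
}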
I0916 23:12:13.443818  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.934327ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.447142  108988 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.677748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.464637  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.874581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.465381  108988 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0916 23:12:13.483582  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:13.483617  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:13.483673  108988 httplog.go:90] GET /healthz: (4.423995ms) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:13.487014  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (4.080907ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.489365  108988 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.744206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.496769  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:13.496804  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:13.496877  108988 httplog.go:90] GET /healthz: (1.929081ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.505157  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (3.46233ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.505484  108988 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0916 23:12:13.523686  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.936684ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.526241  108988 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.817191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.545183  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.415249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.545555  108988 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0916 23:12:13.563760  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.942627ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.566302  108988 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.902478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.592573  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:13.592616  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:13.592712  108988 httplog.go:90] GET /healthz: (13.350181ms) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:13.595822  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (10.853769ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.596571  108988 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0916 23:12:13.597179  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:13.597209  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:13.597252  108988 httplog.go:90] GET /healthz: (2.384924ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.603042  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.438331ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.605241  108988 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.744645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.625491  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.651618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.625931  108988 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0916 23:12:13.644444  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (2.148474ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.646924  108988 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.884692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.664462  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.678124ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.664889  108988 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0916 23:12:13.683459  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:13.683492  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:13.683537  108988 httplog.go:90] GET /healthz: (4.197949ms) 0 [Go-http-client/1.1 127.0.0.1:55654]
I0916 23:12:13.684089  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.667915ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.686469  108988 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.681661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.700869  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:13.700908  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:13.700981  108988 httplog.go:90] GET /healthz: (2.889948ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.704597  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.867759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.705266  108988 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0916 23:12:13.723583  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.790285ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.726177  108988 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.99522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.746770  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (4.60904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.747132  108988 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0916 23:12:13.763060  108988 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.41916ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.765236  108988 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.670412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.780708  108988 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 23:12:13.780767  108988 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 23:12:13.780814  108988 httplog.go:90] GET /healthz: (1.501104ms) 0 [Go-http-client/1.1 127.0.0.1:55656]
I0916 23:12:13.784271  108988 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.521636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.784793  108988 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0916 23:12:13.796443  108988 httplog.go:90] GET /healthz: (1.416473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.798357  108988 httplog.go:90] GET /api/v1/namespaces/default: (1.444647ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.801095  108988 httplog.go:90] POST /api/v1/namespaces: (1.92866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.803194  108988 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.255267ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.809137  108988 httplog.go:90] POST /api/v1/namespaces/default/services: (5.402277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.810871  108988 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.331655ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.813893  108988 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (2.602486ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
E0916 23:12:13.868472  108988 factory.go:590] Error getting pod permit-plugincda76952-bee6-4dc3-b445-7b3eea618871/test-pod for retry: Get http://127.0.0.1:45265/api/v1/namespaces/permit-plugincda76952-bee6-4dc3-b445-7b3eea618871/pods/test-pod: dial tcp 127.0.0.1:45265: connect: connection refused; retrying...
I0916 23:12:13.881144  108988 httplog.go:90] GET /healthz: (1.722231ms) 200 [Go-http-client/1.1 127.0.0.1:55656]
W0916 23:12:13.882063  108988 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 23:12:13.882125  108988 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 23:12:13.882140  108988 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 23:12:13.882175  108988 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 23:12:13.882191  108988 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 23:12:13.882203  108988 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 23:12:13.882212  108988 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 23:12:13.882226  108988 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 23:12:13.882253  108988 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 23:12:13.882264  108988 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 23:12:13.882345  108988 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0916 23:12:13.882372  108988 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0916 23:12:13.882389  108988 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0916 23:12:13.882627  108988 shared_informer.go:197] Waiting for caches to sync for scheduler
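The DefaultProvider predicate set logged above includes CheckNodePIDPressure, which is the behavior this test exercises: a node whose PIDPressure condition is True should be filtered out for pods that do not tolerate it. The following is not the scheduler's actual predicate implementation, just a toy illustration of the condition check using the core/v1 types:

package pidpressure

import (
	v1 "k8s.io/api/core/v1"
)

// hasPIDPressure reports whether the node's PIDPressure condition is True,
// the situation the CheckNodePIDPressure predicate is meant to catch.
func hasPIDPressure(node *v1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodePIDPressure && cond.Status == v1.ConditionTrue {
			return true
		}
	}
	return false
}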
I0916 23:12:13.882912  108988 reflector.go:120] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0916 23:12:13.882938  108988 reflector.go:158] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0916 23:12:13.884238  108988 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (962.406µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:13.885275  108988 get.go:251] Starting watch for /api/v1/pods, rv=30612 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=8m58s
I0916 23:12:13.982791  108988 shared_informer.go:227] caches populated
I0916 23:12:13.982833  108988 shared_informer.go:204] Caches are synced for scheduler 
I0916 23:12:13.983184  108988 reflector.go:120] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.983208  108988 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.983778  108988 reflector.go:120] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.983797  108988 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.983888  108988 reflector.go:120] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.983914  108988 reflector.go:158] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.984105  108988 reflector.go:120] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.984131  108988 reflector.go:158] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.984305  108988 reflector.go:120] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.984321  108988 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.984358  108988 reflector.go:120] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.984370  108988 reflector.go:158] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.984678  108988 reflector.go:120] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.984694  108988 reflector.go:158] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.984771  108988 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.984786  108988 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.984808  108988 reflector.go:120] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.984821  108988 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.987415  108988 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (885.96µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55796]
I0916 23:12:13.987625  108988 reflector.go:120] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.987645  108988 reflector.go:158] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0916 23:12:13.988268  108988 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (707.321µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55782]
I0916 23:12:13.988550  108988 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (547.945µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.988763  108988 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (398.552µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55784]
I0916 23:12:13.989898  108988 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (420.613µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:13.990485  108988 get.go:251] Starting watch for /api/v1/services, rv=30888 labels= fields= timeout=6m30s
I0916 23:12:13.990495  108988 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=30619 labels= fields= timeout=6m56s
I0916 23:12:13.991033  108988 get.go:251] Starting watch for /api/v1/nodes, rv=30612 labels= fields= timeout=5m46s
I0916 23:12:13.991340  108988 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30621 labels= fields= timeout=6m20s
I0916 23:12:13.992057  108988 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=30621 labels= fields= timeout=8m51s
I0916 23:12:13.993421  108988 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (457.512µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55790]
I0916 23:12:13.994075  108988 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=30623 labels= fields= timeout=5m58s
I0916 23:12:13.994246  108988 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (1.969625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55792]
I0916 23:12:13.994804  108988 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=30613 labels= fields= timeout=6m2s
I0916 23:12:13.995454  108988 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (384.144µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55794]
I0916 23:12:13.996089  108988 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30612 labels= fields= timeout=9m22s
I0916 23:12:13.997274  108988 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (6.377753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55788]
I0916 23:12:13.998079  108988 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (413.583µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55786]
I0916 23:12:13.999298  108988 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=30623 labels= fields= timeout=8m52s
I0916 23:12:13.999467  108988 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30612 labels= fields= timeout=7m14s
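Each "Starting reflector ... (1s)" / "Listing and watching ..." pair above is a shared informer doing an initial LIST (the resourceVersion=0 GETs) and then a WATCH from the returned resourceVersion; the 1s resync period is why "forcing resync" lines appear every second later in the log. A minimal client-go sketch of the same list+watch+resync cycle, assuming the clientset already exists; the 1s resync matches what the log shows:

package informersketch

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// startPodInformer lists and then watches Pods, resyncing every second,
// the same cycle the reflector log lines above describe.
func startPodInformer(cs kubernetes.Interface, stopCh <-chan struct{}) {
	factory := informers.NewSharedInformerFactory(cs, time.Second)
	podInformer := factory.Core().V1().Pods().Informer()
	factory.Start(stopCh)            // kicks off the reflectors: LIST then WATCH
	factory.WaitForCacheSync(stopCh) // "caches populated" / "Caches are synced"
	_ = podInformer.GetStore().List()
}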
I0916 23:12:14.083087  108988 shared_informer.go:227] caches populated
I0916 23:12:14.083125  108988 shared_informer.go:227] caches populated
I0916 23:12:14.083132  108988 shared_informer.go:227] caches populated
I0916 23:12:14.083138  108988 shared_informer.go:227] caches populated
I0916 23:12:14.083145  108988 shared_informer.go:227] caches populated
I0916 23:12:14.083151  108988 shared_informer.go:227] caches populated
I0916 23:12:14.083157  108988 shared_informer.go:227] caches populated
I0916 23:12:14.083163  108988 shared_informer.go:227] caches populated
I0916 23:12:14.083169  108988 shared_informer.go:227] caches populated
I0916 23:12:14.083180  108988 shared_informer.go:227] caches populated
I0916 23:12:14.083192  108988 shared_informer.go:227] caches populated
I0916 23:12:14.086516  108988 httplog.go:90] POST /api/v1/nodes: (2.658731ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:14.086932  108988 node_tree.go:93] Added node "testnode" in group "" to NodeTree
I0916 23:12:14.090323  108988 httplog.go:90] PUT /api/v1/nodes/testnode/status: (2.89273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
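The POST /api/v1/nodes followed by PUT /api/v1/nodes/testnode/status is the test creating its single node and then writing the node's status through the status subresource; the log does not show which conditions or capacity values it sets. A hedged sketch of such a status update, again assuming pre-context client-go signatures, with the PIDPressure condition used only as an illustration:

package nodestatus

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setPIDPressure appends a PIDPressure condition and writes it via the status
// subresource, i.e. a PUT /api/v1/nodes/<name>/status like the one above.
// Whether the test sets this exact condition is an assumption.
func setPIDPressure(cs kubernetes.Interface, nodeName string, pressured bool) error {
	node, err := cs.CoreV1().Nodes().Get(nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	status := v1.ConditionFalse
	if pressured {
		status = v1.ConditionTrue
	}
	node.Status.Conditions = append(node.Status.Conditions, v1.NodeCondition{
		Type:   v1.NodePIDPressure,
		Status: status,
	})
	_, err = cs.CoreV1().Nodes().UpdateStatus(node)
	return err
}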
I0916 23:12:14.093100  108988 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods: (2.307091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:14.093221  108988 scheduling_queue.go:830] About to try and schedule pod node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pidpressure-fake-name
I0916 23:12:14.093240  108988 scheduler.go:530] Attempting to schedule pod: node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pidpressure-fake-name
I0916 23:12:14.093427  108988 scheduler_binder.go:257] AssumePodVolumes for pod "node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pidpressure-fake-name", node "testnode"
I0916 23:12:14.093448  108988 scheduler_binder.go:267] AssumePodVolumes for pod "node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pidpressure-fake-name", node "testnode": all PVCs bound and nothing to do
I0916 23:12:14.093505  108988 factory.go:606] Attempting to bind pidpressure-fake-name to testnode
I0916 23:12:14.096183  108988 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name/binding: (2.128949ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:14.096403  108988 scheduler.go:662] pod node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pidpressure-fake-name is bound successfully on node "testnode", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0916 23:12:14.098629  108988 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/events: (1.731984ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
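The scheduling unit above runs the generic flow: the pod is popped from the scheduling queue, volumes are assumed (nothing to bind here), and the result is committed through the pods/binding subresource, which is the POST .../pods/pidpressure-fake-name/binding request, followed by an event write. A sketch of that final binding step using client-go's Bind helper (context-free signature of this era):

package bindsketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPodToNode posts a Binding object, the same request the scheduler's
// "Attempting to bind pidpressure-fake-name to testnode" step issues.
func bindPodToNode(cs kubernetes.Interface, namespace, podName, nodeName string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Name: podName, Namespace: namespace},
		Target:     v1.ObjectReference{Kind: "Node", Name: nodeName},
	}
	return cs.CoreV1().Pods(namespace).Bind(binding)
}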
I0916 23:12:14.195913  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.106282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:14.296477  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.540901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:14.397746  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.849841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:14.496384  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.59544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:14.596035  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.213518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:14.696137  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.272879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:14.797037  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.317231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:14.895993  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.105326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
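From here to the end of the section the test re-fetches the pod roughly every 100ms (the steady GET .../pods/pidpressure-fake-name lines); the condition it is waiting for is checked client-side and is not visible in the log. A sketch of such a poll loop with wait.Poll, where the per-pod predicate, the 30s timeout, and the helper name are all assumptions; only the 100ms cadence is taken from the log timestamps:

package pollsketch

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodCondition re-fetches the pod on an interval until the supplied
// predicate is satisfied, producing repeated GET requests like those above.
func waitForPodCondition(cs kubernetes.Interface, namespace, name string, cond func(*v1.Pod) (bool, error)) error {
	return wait.Poll(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return cond(pod)
	})
}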
I0916 23:12:14.989314  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:14.989418  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:14.990115  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:14.990229  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:14.995918  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:14.996014  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.256384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:14.999114  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:15.095876  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.025532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:15.198987  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (4.309177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:15.298138  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.794881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:15.396416  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.522973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:15.496289  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.355864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:15.596795  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.837675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:15.696159  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.300132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:15.796211  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.335523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:15.898511  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.178233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:15.989537  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:15.989925  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:15.990331  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:15.990363  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:15.996088  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.186606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:15.996629  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:15.999280  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:16.096073  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.174554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:16.197813  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.955776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:16.295955  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.069304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:16.396174  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.243464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:16.497517  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.307439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:16.596243  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.269123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:16.696285  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.366039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:16.796199  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.201091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:16.896283  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.263997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:16.989774  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:16.990097  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:16.990499  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:16.990585  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:16.996256  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.32346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:16.996913  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:16.999461  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:17.096345  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.395034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:17.196008  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.097531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:17.295785  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.874292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:17.396280  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.407366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:17.499110  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (4.616384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:17.595706  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.911956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:17.695979  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.142584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:17.796198  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.19767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:17.895933  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.188662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:17.990979  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:17.991016  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:17.991036  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:17.995008  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:17.997083  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:17.998418  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (4.669434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:17.999638  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:18.097231  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.412914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:18.196274  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.405286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:18.296635  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.668299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:18.396024  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.193535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:18.496054  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.231016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:18.597455  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.318934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:18.696555  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.685991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:18.796098  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.306134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:18.896153  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.326235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:18.991318  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:18.992323  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:18.992566  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:18.995282  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:18.996348  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.553308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:18.997256  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:18.999804  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:19.098777  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.150852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:19.196277  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.462376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:19.296563  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.666635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:19.396240  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.312504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:19.496328  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.493718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:19.597035  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.220072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:19.696365  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.517727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:19.796308  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.310084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:19.896659  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.591291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:19.991490  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:19.992667  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:19.992860  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:19.995439  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:19.996159  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.339174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:19.997428  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:19.999998  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:20.096317  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.398229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:20.195785  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.008942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:20.296197  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.244179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:20.396276  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.340937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:20.496556  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.698303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:20.596410  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.556444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:20.696951  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.664688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:20.796151  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.240948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:20.896045  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.196114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:20.991698  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:20.992834  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:20.993018  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:20.995976  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.038867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:20.996366  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:20.997603  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:21.000183  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:21.096099  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.191159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:21.195990  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.16079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:21.296061  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.135752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:21.395956  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.139409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:21.495949  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.104332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:21.596787  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.960732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:21.696316  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.389758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:21.796429  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.583817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:21.896194  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.282881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:21.991923  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:21.993026  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:21.993284  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:21.996064  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.193695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:21.996526  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:21.997765  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:22.000336  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:22.096404  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.445496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:22.196028  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.116623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:22.296158  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.242088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:22.396097  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.209514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:22.496465  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.431145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:22.596525  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.535379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:22.696192  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.251877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:22.796241  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.310759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:22.898340  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.707444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:22.992124  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:22.993205  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:22.994539  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:22.996209  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.441454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:22.996671  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:22.999939  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:23.000464  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:23.096080  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.220846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:23.196188  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.42586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:23.296042  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.142913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
E0916 23:12:23.328073  108988 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:45265/apis/events.k8s.io/v1beta1/namespaces/permit-plugincda76952-bee6-4dc3-b445-7b3eea618871/events: dial tcp 127.0.0.1:45265: connect: connection refused' (may retry after sleeping)
I0916 23:12:23.397691  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.56501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:23.496307  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.490105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:23.596629  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.81341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:23.696185  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.286573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:23.796009  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.036789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:23.798553  108988 httplog.go:90] GET /api/v1/namespaces/default: (1.502018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:23.800489  108988 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.605018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:23.801859  108988 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.086935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:23.896267  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.368134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:23.995118  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:23.995485  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:23.995537  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:23.998092  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.332455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:23.998365  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:24.003996  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:24.004064  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:24.096711  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.513488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:24.198444  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.696531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:24.296242  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.306047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:24.397355  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.428595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:24.495790  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.028741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:24.596333  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.317844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:24.695781  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.937929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:24.795918  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.089659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:24.896595  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.655319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:24.995271  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:24.995641  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:24.996874  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:25.000152  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:25.003540  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (7.536919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:25.004164  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:25.004174  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:25.096501  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.640991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:25.196011  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.216711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:25.296148  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.205241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:25.395928  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.098331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:25.496904  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.900114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:25.596556  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.5568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:25.696297  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.332776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:25.796035  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.171487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:25.895815  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.923333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:25.995527  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:25.995798  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:25.996183  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.325335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:25.997074  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:26.000304  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:26.004288  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:26.004612  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:26.096395  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.437096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:26.196241  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.353649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:26.301152  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (7.005372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:26.395638  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.898728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:26.496320  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.490333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:26.596381  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.445619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:26.696400  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.502612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:26.796315  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.390749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:26.896758  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.304331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:26.995694  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:26.996031  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:26.998007  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:26.998474  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (4.580811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:27.000479  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:27.004470  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:27.004779  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:27.096502  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.579599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:27.195893  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.095542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:27.296186  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.160916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:27.396083  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.140362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:27.496165  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.349122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:27.596494  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.492658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:27.696078  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.221966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:27.796649  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.640952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:27.896519  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.563675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:27.996103  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:27.996350  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:27.996515  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.7105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:27.998111  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:28.000663  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:28.004675  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:28.004957  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:28.110253  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.51839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:28.200397  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (6.000131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:28.296698  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.785166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:28.396693  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.851175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:28.496614  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.622976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:28.596334  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.500746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:28.695874  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.012019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:28.797066  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.184631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:28.896330  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.41472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:28.996287  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.482785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:28.996703  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:28.997399  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:28.998317  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:29.000872  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:29.004864  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:29.005132  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:29.097673  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.717755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:29.196769  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.870333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:29.302580  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.989269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:29.396227  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.295726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:29.497712  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.566326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:29.596361  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.411279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:29.696309  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.485541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:29.796509  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.49912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:29.896051  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.222795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:29.996510  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.6655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:29.996872  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:29.997574  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:29.998462  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:30.001046  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:30.005010  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:30.005279  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:30.098745  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.266861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:30.196970  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.040142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:30.296199  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.286703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:30.396278  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.299441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:30.496196  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.277987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:30.596226  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.375167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:30.696122  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.272008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:30.795879  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.024396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:30.895669  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.923586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:30.996214  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.304755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:30.997042  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:30.997718  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:30.998644  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:31.001227  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:31.005190  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:31.005439  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:31.096106  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.19361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:31.196222  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.240088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:31.296073  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.212791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:31.396206  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.291132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:31.496436  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.441211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:31.596201  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.383517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:31.696136  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.29669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:31.796124  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.212045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:31.896272  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.369437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:31.997218  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:31.997884  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:31.998866  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:32.006205  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:32.006260  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:32.007949  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:32.046772  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (52.845701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:32.116343  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (17.95033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:32.195572  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.724058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:32.296161  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.33788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:32.396359  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.34658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:32.495992  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.028271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:32.598744  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.404456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:32.696262  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.420843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:32.796755  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.353699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:32.902950  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (9.089673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:32.997424  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:32.997533  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.314102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:32.998080  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:32.999017  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:33.006405  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:33.006620  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:33.010170  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:33.110279  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (16.460085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:33.204718  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (10.713879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:33.296128  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.020358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:33.396942  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.390887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:33.496454  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.48903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:33.596646  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.043627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:33.696996  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.378529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:33.796690  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.673718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55800]
I0916 23:12:33.800250  108988 httplog.go:90] GET /api/v1/namespaces/default: (2.48389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:33.803522  108988 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.608155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:33.808860  108988 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (4.854607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:33.899289  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.527488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:33.996279  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.360393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:33.997699  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:33.998323  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:33.999163  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:34.006650  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:34.007154  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:34.010955  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
E0916 23:12:34.067390  108988 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:45265/apis/events.k8s.io/v1beta1/namespaces/permit-plugincda76952-bee6-4dc3-b445-7b3eea618871/events: dial tcp 127.0.0.1:45265: connect: connection refused' (may retry after sleeping)
I0916 23:12:34.096211  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.33713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:34.195931  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.100706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:34.296483  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.505136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:34.401366  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (7.53093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:34.496065  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.231885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:34.597612  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.448996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:34.695782  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.941479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:34.795888  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.060812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:34.896169  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.349232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:34.996370  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.569725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:34.997861  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:34.998431  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:34.999345  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:35.006879  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:35.007298  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:35.011181  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:35.098346  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.250843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:35.197732  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.620179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:35.297150  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.324027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:35.396164  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.343887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:35.496183  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.928052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:35.595987  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.103068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:35.696413  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.415632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:35.795793  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.889756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:35.896898  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.083405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:35.996400  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.469595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:35.998024  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:35.998548  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:35.999536  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:36.007082  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:36.007444  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:36.011319  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:36.096077  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.179852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:36.195789  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.997662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:36.296348  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.51706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:36.396041  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.252818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:36.496148  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.17281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:36.596122  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.20686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:36.699660  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (5.633277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:36.796262  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.401238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:36.896042  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.228168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:36.996890  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.054587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:36.998197  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:36.998671  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:37.000257  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:37.007322  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:37.007705  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:37.011521  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:37.096167  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.195616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:37.196371  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.423501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:37.295825  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.744731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:37.396296  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.366057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:37.496359  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.341225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:37.595593  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.746112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:37.697922  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.985982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:37.797066  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.249177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:37.896274  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.347546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:37.996374  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.614609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:37.998368  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:37.998803  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:38.000410  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:38.007546  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:38.008690  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:38.011681  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:38.096064  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.16838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:38.196193  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.390509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:38.296178  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.341104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:38.396346  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.411323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:38.496004  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.1093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:38.596273  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.354929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:38.695968  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.02199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:38.795795  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.896534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:38.895657  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.741994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:38.995748  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.906464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:38.998545  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:38.998953  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:39.000528  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:39.007756  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:39.009537  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:39.012016  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:39.096270  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.336198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:39.195909  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.065886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:39.296544  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.544399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:39.396376  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.578668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
E0916 23:12:39.469589  108988 factory.go:590] Error getting pod permit-plugincda76952-bee6-4dc3-b445-7b3eea618871/test-pod for retry: Get http://127.0.0.1:45265/api/v1/namespaces/permit-plugincda76952-bee6-4dc3-b445-7b3eea618871/pods/test-pod: dial tcp 127.0.0.1:45265: connect: connection refused; retrying...
I0916 23:12:39.496441  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.541546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:39.599774  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (4.705751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:39.696630  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.4673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:39.797080  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.253355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:39.896537  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.639492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:39.995613  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.864915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:39.998741  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:39.999128  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:40.000740  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:40.007992  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:40.009763  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:40.012215  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:40.096435  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.464549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:40.196323  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.491902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:40.296061  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.088993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:40.396096  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.283316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:40.497320  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (3.525779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:40.596107  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.296267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:40.696525  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.554552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:40.796454  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.262596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:40.895969  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.86661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:40.995875  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.016417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:40.998898  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:40.999283  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:41.001031  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:41.008252  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:41.009948  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:41.012391  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:41.095903  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.130175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:41.196559  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.722348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:41.295891  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.954465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:41.396444  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.526514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:41.496156  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.214119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:41.596050  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.198963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:41.696147  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.050461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:41.796212  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.249038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:41.912348  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (18.494244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:41.996310  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.310306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:41.999053  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:41.999456  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:42.001247  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:42.008464  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:42.010281  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:42.012939  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:42.099435  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (5.509279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:42.199816  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.924284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:42.296210  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.317082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:42.396211  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.285126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:42.501359  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.048724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:42.616283  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (22.437522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:42.698218  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (4.367684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:42.796091  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.270352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:42.896655  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.761719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:42.996599  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.817216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:42.999240  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:42.999634  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:43.001612  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:43.008701  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:43.010428  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:43.013154  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:43.096082  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.31828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:43.195952  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.120945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:43.296765  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.83742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:43.396833  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.910545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:43.496560  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.677225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:43.601073  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.743961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:43.705331  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (11.410494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:43.796353  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.404386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:43.804517  108988 httplog.go:90] GET /api/v1/namespaces/default: (7.110718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:43.821785  108988 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.686668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:43.826779  108988 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.919115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:43.895833  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.83514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:43.996744  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (2.906241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:43.999578  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:43.999805  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:44.002350  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:44.008936  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:44.010631  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:44.013338  108988 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 23:12:44.096102  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.876534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:44.098317  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.595315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:44.106183  108988 httplog.go:90] DELETE /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (7.290564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:44.109069  108988 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure5ed218e3-d590-41e1-a602-733b0f2b2839/pods/pidpressure-fake-name: (1.36928ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
E0916 23:12:44.110220  108988 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0916 23:12:44.110330  108988 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=30619&timeout=6m56s&timeoutSeconds=416&watch=true: (30.119998806s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55654]
I0916 23:12:44.110512  108988 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=30888&timeout=6m30s&timeoutSeconds=390&watch=true: (30.120378962s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55798]
I0916 23:12:44.110635  108988 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30621&timeout=6m20s&timeoutSeconds=380&watch=true: (30.119902144s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55796]
I0916 23:12:44.110642  108988 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=30621&timeout=8m51s&timeoutSeconds=531&watch=true: (30.118813282s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55784]
I0916 23:12:44.110761  108988 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=30613&timeout=6m2s&timeoutSeconds=362&watch=true: (30.116231926s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55792]
I0916 23:12:44.110784  108988 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30623&timeout=5m58s&timeoutSeconds=358&watch=true: (30.116953634s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55790]
I0916 23:12:44.110902  108988 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30612&timeout=9m22s&timeoutSeconds=562&watch=true: (30.115071734s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55794]
I0916 23:12:44.111027  108988 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30612&timeout=7m14s&timeoutSeconds=434&watch=true: (30.112008647s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55788]
I0916 23:12:44.111052  108988 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30623&timeout=8m52s&timeoutSeconds=532&watch=true: (30.112005982s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55786]
I0916 23:12:44.111146  108988 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30612&timeout=5m46s&timeoutSeconds=346&watch=true: (30.12040939s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55782]
I0916 23:12:44.111179  108988 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=30612&timeoutSeconds=538&watch=true: (30.226366856s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55656]
I0916 23:12:44.115347  108988 httplog.go:90] DELETE /api/v1/nodes: (5.785136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:44.115796  108988 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0916 23:12:44.117650  108988 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.489603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0916 23:12:44.120287  108988 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.135896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
--- FAIL: TestNodePIDPressure (33.94s)
    predicates_test.go:924: Test Failed: error, timed out waiting for the condition, while waiting for scheduled

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190916-230338.xml
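
For readers tracing the failure above: the repeated GET requests for pods/pidpressure-fake-name and the final "timed out waiting for the condition, while waiting for scheduled" message indicate the test polls the pod until the scheduler assigns it a node, and gives up after a timeout. The Go sketch below illustrates only that polling pattern; the names waitForScheduled and podScheduled are placeholders, and the real test (in test/integration/scheduler) uses the client-go clientset and apimachinery wait helpers, which are not shown in this log.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // podScheduled is a stand-in for the test's check: the real condition would GET
    // the pod from the API server (the repeated GET .../pods/pidpressure-fake-name
    // lines above) and report whether the scheduler has set spec.nodeName.
    type podScheduled func() (bool, error)

    // waitForScheduled polls the condition at a fixed interval until it succeeds or
    // the timeout elapses, mirroring the "timed out waiting for the condition" error.
    func waitForScheduled(cond podScheduled, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		ok, err := cond()
    		if err != nil {
    			return err
    		}
    		if ok {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return errors.New("timed out waiting for the condition")
    }

    func main() {
    	// A condition that never succeeds reproduces the failure mode seen in
    	// TestNodePIDPressure: the poll gives up and reports the timeout.
    	err := waitForScheduled(func() (bool, error) { return false, nil },
    		100*time.Millisecond, 500*time.Millisecond)
    	fmt.Println(err)
    }
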




Error lines from build-log.txt

... skipping 829 lines ...
W0916 22:58:32.937] I0916 22:58:32.935709   52967 endpoints_controller.go:176] Starting endpoint controller
W0916 22:58:32.937] I0916 22:58:32.936396   52967 shared_informer.go:197] Waiting for caches to sync for endpoint
W0916 22:58:32.937] I0916 22:58:32.933088   52967 cronjob_controller.go:96] Starting CronJob Manager
W0916 22:58:32.937] I0916 22:58:32.935757   52967 gc_controller.go:75] Starting GC controller
W0916 22:58:32.938] I0916 22:58:32.936453   52967 shared_informer.go:197] Waiting for caches to sync for GC
W0916 22:58:32.938] I0916 22:58:32.935930   52967 shared_informer.go:197] Waiting for caches to sync for expand
W0916 22:58:32.938] E0916 22:58:32.937141   52967 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0916 22:58:32.938] W0916 22:58:32.937361   52967 controllermanager.go:526] Skipping "service"
W0916 22:58:32.939] I0916 22:58:32.937829   52967 controllermanager.go:534] Started "clusterrole-aggregation"
W0916 22:58:32.939] I0916 22:58:32.937894   52967 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
W0916 22:58:32.939] I0916 22:58:32.938141   52967 shared_informer.go:197] Waiting for caches to sync for ClusterRoleAggregator
W0916 22:58:32.939] I0916 22:58:32.938664   52967 controllermanager.go:534] Started "pvc-protection"
W0916 22:58:32.940] I0916 22:58:32.938696   52967 pvc_protection_controller.go:100] Starting PVC protection controller
... skipping 36 lines ...
W0916 22:58:33.171] I0916 22:58:33.146825   52967 shared_informer.go:197] Waiting for caches to sync for persistent volume
W0916 22:58:33.172] W0916 22:58:33.147337   52967 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W0916 22:58:33.172] I0916 22:58:33.148303   52967 controllermanager.go:534] Started "attachdetach"
W0916 22:58:33.172] I0916 22:58:33.148464   52967 attach_detach_controller.go:334] Starting attach detach controller
W0916 22:58:33.172] I0916 22:58:33.148494   52967 shared_informer.go:197] Waiting for caches to sync for attach detach
W0916 22:58:33.173] I0916 22:58:33.148697   52967 node_lifecycle_controller.go:77] Sending events to api server
W0916 22:58:33.173] E0916 22:58:33.148806   52967 core.go:201] failed to start cloud node lifecycle controller: no cloud provider provided
W0916 22:58:33.173] W0916 22:58:33.148817   52967 controllermanager.go:526] Skipping "cloud-node-lifecycle"
W0916 22:58:33.173] I0916 22:58:33.158483   52967 controllermanager.go:534] Started "namespace"
W0916 22:58:33.173] I0916 22:58:33.158948   52967 namespace_controller.go:186] Starting namespace controller
W0916 22:58:33.174] I0916 22:58:33.159215   52967 shared_informer.go:197] Waiting for caches to sync for namespace
W0916 22:58:33.174] I0916 22:58:33.159245   52967 controllermanager.go:534] Started "serviceaccount"
W0916 22:58:33.174] I0916 22:58:33.160290   52967 controllermanager.go:534] Started "replicaset"
... skipping 39 lines ...
W0916 22:58:33.636] I0916 22:58:33.572254   52967 node_lifecycle_controller.go:495] Starting node controller
W0916 22:58:33.636] I0916 22:58:33.572291   52967 shared_informer.go:197] Waiting for caches to sync for taint
W0916 22:58:33.636] I0916 22:58:33.572647   52967 controllermanager.go:534] Started "daemonset"
W0916 22:58:33.637] I0916 22:58:33.572698   52967 daemon_controller.go:267] Starting daemon sets controller
W0916 22:58:33.637] I0916 22:58:33.572729   52967 shared_informer.go:197] Waiting for caches to sync for daemon sets
W0916 22:58:33.637] W0916 22:58:33.572760   52967 controllermanager.go:526] Skipping "csrsigning"
W0916 22:58:33.637] W0916 22:58:33.609119   52967 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0916 22:58:33.637] I0916 22:58:33.632577   52967 shared_informer.go:204] Caches are synced for certificate-csrapproving 
W0916 22:58:33.638] I0916 22:58:33.638367   52967 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
W0916 22:58:33.656] E0916 22:58:33.656038   52967 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0916 22:58:33.660] I0916 22:58:33.659563   52967 shared_informer.go:204] Caches are synced for namespace 
W0916 22:58:33.661] E0916 22:58:33.661101   52967 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0916 22:58:33.662] I0916 22:58:33.662002   52967 shared_informer.go:204] Caches are synced for service account 
W0916 22:58:33.662] I0916 22:58:33.662092   52967 shared_informer.go:204] Caches are synced for PV protection 
W0916 22:58:33.665] I0916 22:58:33.665330   49450 controller.go:606] quota admission added evaluator for: serviceaccounts
W0916 22:58:33.670] E0916 22:58:33.670335   52967 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0916 22:58:33.671] I0916 22:58:33.671297   52967 shared_informer.go:204] Caches are synced for TTL 
W0916 22:58:33.732] I0916 22:58:33.732151   52967 shared_informer.go:204] Caches are synced for stateful set 
W0916 22:58:33.739] I0916 22:58:33.739002   52967 shared_informer.go:204] Caches are synced for GC 
W0916 22:58:33.740] I0916 22:58:33.739281   52967 shared_informer.go:204] Caches are synced for PVC protection 
W0916 22:58:33.763] I0916 22:58:33.763213   52967 shared_informer.go:204] Caches are synced for ReplicaSet 
W0916 22:58:33.770] I0916 22:58:33.769453   52967 shared_informer.go:204] Caches are synced for job 
... skipping 88 lines ...
I0916 22:58:37.258] +++ working dir: /go/src/k8s.io/kubernetes
I0916 22:58:37.261] +++ command: run_RESTMapper_evaluation_tests
I0916 22:58:37.272] +++ [0916 22:58:37] Creating namespace namespace-1568674717-8844
I0916 22:58:37.344] namespace/namespace-1568674717-8844 created
I0916 22:58:37.410] Context "test" modified.
I0916 22:58:37.416] +++ [0916 22:58:37] Testing RESTMapper
I0916 22:58:37.519] +++ [0916 22:58:37] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0916 22:58:37.533] +++ exit code: 0
I0916 22:58:37.654] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0916 22:58:37.654] bindings                                                                      true         Binding
I0916 22:58:37.655] componentstatuses                 cs                                          false        ComponentStatus
I0916 22:58:37.655] configmaps                        cm                                          true         ConfigMap
I0916 22:58:37.655] endpoints                         ep                                          true         Endpoints
... skipping 616 lines ...
I0916 22:58:57.950] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0916 22:58:58.044] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0916 22:58:58.167] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0916 22:58:58.260] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0916 22:58:58.427] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 22:58:58.635] (Bpod/env-test-pod created
W0916 22:58:58.736] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0916 22:58:58.737] error: setting 'all' parameter but found a non empty selector. 
W0916 22:58:58.737] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0916 22:58:58.738] I0916 22:58:57.623889   49450 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0916 22:58:58.738] error: min-available and max-unavailable cannot be both specified
I0916 22:58:58.849] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0916 22:58:58.850] Name:         env-test-pod
I0916 22:58:58.850] Namespace:    test-kubectl-describe-pod
I0916 22:58:58.850] Priority:     0
I0916 22:58:58.850] Node:         <none>
I0916 22:58:58.850] Labels:       <none>
... skipping 174 lines ...
I0916 22:59:12.393] (Bpod/valid-pod patched
I0916 22:59:12.497] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0916 22:59:12.575] (Bpod/valid-pod patched
I0916 22:59:12.691] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0916 22:59:12.868] (Bpod/valid-pod patched
I0916 22:59:12.975] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0916 22:59:13.162] (B+++ [0916 22:59:13] "kubectl patch with resourceVersion 500" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0916 22:59:13.419] pod "valid-pod" deleted
I0916 22:59:13.431] pod/valid-pod replaced
I0916 22:59:13.533] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0916 22:59:13.718] (BSuccessful
I0916 22:59:13.718] message:error: --grace-period must have --force specified
I0916 22:59:13.719] has:\-\-grace-period must have \-\-force specified
I0916 22:59:13.875] Successful
I0916 22:59:13.875] message:error: --timeout must have --force specified
I0916 22:59:13.875] has:\-\-timeout must have \-\-force specified
I0916 22:59:14.040] node/node-v1-test created
W0916 22:59:14.141] W0916 22:59:14.040589   52967 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0916 22:59:14.242] node/node-v1-test replaced
I0916 22:59:14.327] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0916 22:59:14.402] (Bnode "node-v1-test" deleted
I0916 22:59:14.500] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0916 22:59:14.781] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0916 22:59:15.821] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 33 lines ...
I0916 22:59:16.985] Context "test" modified.
W0916 22:59:17.086] I0916 22:59:14.276259   52967 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-v1-test", UID:"579776b7-32a6-45c4-bab0-8ce616b7c697", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-v1-test event: Registered Node node-v1-test in Controller
W0916 22:59:17.087] Edit cancelled, no changes made.
W0916 22:59:17.087] Edit cancelled, no changes made.
W0916 22:59:17.088] Edit cancelled, no changes made.
W0916 22:59:17.088] Edit cancelled, no changes made.
W0916 22:59:17.088] error: 'name' already has a value (valid-pod), and --overwrite is false
W0916 22:59:17.088] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0916 22:59:17.189] core.sh:610: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 22:59:17.258] (Bpod/redis-master created
I0916 22:59:17.261] pod/valid-pod created
I0916 22:59:17.359] core.sh:614: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0916 22:59:17.451] (Bcore.sh:618: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
... skipping 76 lines ...
I0916 22:59:23.957] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0916 22:59:23.960] +++ working dir: /go/src/k8s.io/kubernetes
I0916 22:59:23.962] +++ command: run_kubectl_create_error_tests
I0916 22:59:23.974] +++ [0916 22:59:23] Creating namespace namespace-1568674763-19173
I0916 22:59:24.043] namespace/namespace-1568674763-19173 created
I0916 22:59:24.116] Context "test" modified.
I0916 22:59:24.123] +++ [0916 22:59:24] Testing kubectl create with error
W0916 22:59:24.224] Error: must specify one of -f and -k
W0916 22:59:24.224] 
W0916 22:59:24.224] Create a resource from a file or from stdin.
W0916 22:59:24.225] 
W0916 22:59:24.225]  JSON and YAML formats are accepted.
W0916 22:59:24.225] 
W0916 22:59:24.225] Examples:
... skipping 41 lines ...
W0916 22:59:24.233] 
W0916 22:59:24.233] Usage:
W0916 22:59:24.233]   kubectl create -f FILENAME [options]
W0916 22:59:24.233] 
W0916 22:59:24.234] Use "kubectl <command> --help" for more information about a given command.
W0916 22:59:24.234] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0916 22:59:24.373] +++ [0916 22:59:24] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0916 22:59:24.473] kubectl convert is DEPRECATED and will be removed in a future version.
W0916 22:59:24.474] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0916 22:59:24.574] +++ exit code: 0
I0916 22:59:24.626] Recording: run_kubectl_apply_tests
I0916 22:59:24.626] Running command: run_kubectl_apply_tests
I0916 22:59:24.649] 
... skipping 16 lines ...
I0916 22:59:26.315] apply.sh:276: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
I0916 22:59:26.403] (Bpod "test-pod" deleted
I0916 22:59:26.649] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W0916 22:59:26.966] I0916 22:59:26.966195   49450 client.go:361] parsed scheme: "endpoint"
W0916 22:59:26.967] I0916 22:59:26.966249   49450 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0916 22:59:26.970] I0916 22:59:26.970321   49450 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0916 22:59:27.058] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0916 22:59:27.159] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0916 22:59:27.159] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0916 22:59:27.185] +++ exit code: 0
I0916 22:59:27.349] Recording: run_kubectl_run_tests
I0916 22:59:27.350] Running command: run_kubectl_run_tests
I0916 22:59:27.373] 
... skipping 92 lines ...
I0916 22:59:29.867] Context "test" modified.
I0916 22:59:29.874] +++ [0916 22:59:29] Testing kubectl create filter
I0916 22:59:29.960] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 22:59:30.129] (Bpod/selector-test-pod created
I0916 22:59:30.225] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0916 22:59:30.313] (BSuccessful
I0916 22:59:30.314] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0916 22:59:30.314] has:pods "selector-test-pod-dont-apply" not found
I0916 22:59:30.400] pod "selector-test-pod" deleted
I0916 22:59:30.420] +++ exit code: 0
I0916 22:59:30.454] Recording: run_kubectl_apply_deployments_tests
I0916 22:59:30.454] Running command: run_kubectl_apply_deployments_tests
I0916 22:59:30.478] 
... skipping 23 lines ...
I0916 22:59:31.769] apps.sh:130: Successful get deployments my-depl {{.spec.template.metadata.labels.l1}}: l1
I0916 22:59:31.865] (Bapps.sh:131: Successful get deployments my-depl {{.spec.selector.matchLabels.l1}}: l1
I0916 22:59:31.960] (Bapps.sh:132: Successful get deployments my-depl {{.metadata.labels.l1}}: <no value>
I0916 22:59:32.055] (Bdeployment.apps "my-depl" deleted
I0916 22:59:32.063] replicaset.apps "my-depl-64b97f7d4d" deleted
I0916 22:59:32.072] pod "my-depl-64b97f7d4d-h44zn" deleted
W0916 22:59:32.173] E0916 22:59:32.073700   52967 replica_set.go:450] Sync "namespace-1568674770-9266/my-depl-64b97f7d4d" failed with Operation cannot be fulfilled on replicasets.apps "my-depl-64b97f7d4d": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1568674770-9266/my-depl-64b97f7d4d, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: dbec9628-5e9d-4a5b-979e-5327fdb415b4, UID in object meta: 
I0916 22:59:32.274] apps.sh:138: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 22:59:32.302] (Bapps.sh:139: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 22:59:32.399] (Bapps.sh:140: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 22:59:32.495] (Bapps.sh:144: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 22:59:32.679] (Bdeployment.apps/nginx created
W0916 22:59:32.779] I0916 22:59:32.682777   52967 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568674770-9266", Name:"nginx", UID:"459a1181-c986-4080-ac7f-6e9eaaa3b1c0", APIVersion:"apps/v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
W0916 22:59:32.780] I0916 22:59:32.686731   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674770-9266", Name:"nginx-8484dd655", UID:"0dc9c5c2-7d72-48ae-8f07-6422a042685b", APIVersion:"apps/v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-wsq9r
W0916 22:59:32.781] I0916 22:59:32.690480   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674770-9266", Name:"nginx-8484dd655", UID:"0dc9c5c2-7d72-48ae-8f07-6422a042685b", APIVersion:"apps/v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-kq259
W0916 22:59:32.781] I0916 22:59:32.691365   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674770-9266", Name:"nginx-8484dd655", UID:"0dc9c5c2-7d72-48ae-8f07-6422a042685b", APIVersion:"apps/v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-lbvbp
I0916 22:59:32.881] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0916 22:59:37.018] (BSuccessful
I0916 22:59:37.019] message:Error from server (Conflict): error when applying patch:
I0916 22:59:37.019] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568674770-9266\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0916 22:59:37.019] to:
I0916 22:59:37.019] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0916 22:59:37.020] Name: "nginx", Namespace: "namespace-1568674770-9266"
I0916 22:59:37.021] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568674770-9266\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-09-16T22:59:32Z" "generation":'\x01' "labels":map["name":"nginx"] "name":"nginx" "namespace":"namespace-1568674770-9266" "resourceVersion":"594" "selfLink":"/apis/apps/v1/namespaces/namespace-1568674770-9266/deployments/nginx" "uid":"459a1181-c986-4080-ac7f-6e9eaaa3b1c0"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-09-16T22:59:32Z" "lastUpdateTime":"2019-09-16T22:59:32Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-09-16T22:59:32Z" "lastUpdateTime":"2019-09-16T22:59:32Z" "message":"ReplicaSet \"nginx-8484dd655\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0916 22:59:37.022] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0916 22:59:37.022] has:Error from server (Conflict)
W0916 22:59:38.290] I0916 22:59:38.289573   52967 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1568674761-29367
I0916 22:59:42.235] deployment.apps/nginx configured
W0916 22:59:42.336] I0916 22:59:42.242757   52967 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568674770-9266", Name:"nginx", UID:"da0c9883-6f58-4652-96ca-90eb20b3cbf1", APIVersion:"apps/v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
W0916 22:59:42.337] I0916 22:59:42.247632   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674770-9266", Name:"nginx-668b6c7744", UID:"9a4e90a5-2f63-414a-96cc-78026cb5c6a9", APIVersion:"apps/v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-ldpsg
W0916 22:59:42.338] I0916 22:59:42.253426   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674770-9266", Name:"nginx-668b6c7744", UID:"9a4e90a5-2f63-414a-96cc-78026cb5c6a9", APIVersion:"apps/v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-v8mb6
W0916 22:59:42.338] I0916 22:59:42.253495   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674770-9266", Name:"nginx-668b6c7744", UID:"9a4e90a5-2f63-414a-96cc-78026cb5c6a9", APIVersion:"apps/v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-5gpvw
... skipping 142 lines ...
I0916 22:59:49.571] +++ [0916 22:59:49] Creating namespace namespace-1568674789-23549
I0916 22:59:49.642] namespace/namespace-1568674789-23549 created
I0916 22:59:49.710] Context "test" modified.
I0916 22:59:49.717] +++ [0916 22:59:49] Testing kubectl get
I0916 22:59:49.803] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 22:59:49.888] (BSuccessful
I0916 22:59:49.888] message:Error from server (NotFound): pods "abc" not found
I0916 22:59:49.888] has:pods "abc" not found
I0916 22:59:49.975] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 22:59:50.065] (BSuccessful
I0916 22:59:50.066] message:Error from server (NotFound): pods "abc" not found
I0916 22:59:50.066] has:pods "abc" not found
I0916 22:59:50.159] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 22:59:50.248] (BSuccessful
I0916 22:59:50.248] message:{
I0916 22:59:50.248]     "apiVersion": "v1",
I0916 22:59:50.249]     "items": [],
... skipping 23 lines ...
I0916 22:59:50.613] has not:No resources found
I0916 22:59:50.706] Successful
I0916 22:59:50.706] message:NAME
I0916 22:59:50.706] has not:No resources found
I0916 22:59:50.794] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 22:59:50.891] (BSuccessful
I0916 22:59:50.891] message:error: the server doesn't have a resource type "foobar"
I0916 22:59:50.892] has not:No resources found
I0916 22:59:50.971] Successful
I0916 22:59:50.972] message:No resources found in namespace-1568674789-23549 namespace.
I0916 22:59:50.972] has:No resources found
I0916 22:59:51.057] Successful
I0916 22:59:51.057] message:
I0916 22:59:51.058] has not:No resources found
I0916 22:59:51.141] Successful
I0916 22:59:51.141] message:No resources found in namespace-1568674789-23549 namespace.
I0916 22:59:51.142] has:No resources found
I0916 22:59:51.229] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 22:59:51.316] (BSuccessful
I0916 22:59:51.317] message:Error from server (NotFound): pods "abc" not found
I0916 22:59:51.317] has:pods "abc" not found
I0916 22:59:51.318] FAIL!
I0916 22:59:51.319] message:Error from server (NotFound): pods "abc" not found
I0916 22:59:51.319] has not:List
I0916 22:59:51.320] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0916 22:59:51.434] Successful
I0916 22:59:51.435] message:I0916 22:59:51.384541   62958 loader.go:375] Config loaded from file:  /tmp/tmp.tDOm9wq1pj/.kube/config
I0916 22:59:51.435] I0916 22:59:51.386333   62958 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0916 22:59:51.435] I0916 22:59:51.407997   62958 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
I0916 22:59:57.001] Successful
I0916 22:59:57.002] message:NAME    DATA   AGE
I0916 22:59:57.002] one     0      0s
I0916 22:59:57.002] three   0      0s
I0916 22:59:57.002] two     0      0s
I0916 22:59:57.003] STATUS    REASON          MESSAGE
I0916 22:59:57.003] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 22:59:57.003] has not:watch is only supported on individual resources
I0916 22:59:58.103] Successful
I0916 22:59:58.104] message:STATUS    REASON          MESSAGE
I0916 22:59:58.104] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 22:59:58.104] has not:watch is only supported on individual resources
I0916 22:59:58.109] +++ [0916 22:59:58] Creating namespace namespace-1568674798-20249
I0916 22:59:58.185] namespace/namespace-1568674798-20249 created
I0916 22:59:58.258] Context "test" modified.
I0916 22:59:58.352] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 22:59:58.520] (Bpod/valid-pod created
... skipping 56 lines ...
I0916 22:59:58.616] }
I0916 22:59:58.692] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0916 22:59:58.928] (B<no value>Successful
I0916 22:59:58.928] message:valid-pod:
I0916 22:59:58.928] has:valid-pod:
I0916 22:59:59.012] Successful
I0916 22:59:59.013] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0916 22:59:59.013] 	template was:
I0916 22:59:59.013] 		{.missing}
I0916 22:59:59.013] 	object given to jsonpath engine was:
I0916 22:59:59.014] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-09-16T22:59:58Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1568674798-20249", "resourceVersion":"698", "selfLink":"/api/v1/namespaces/namespace-1568674798-20249/pods/valid-pod", "uid":"be68744a-9f18-40fe-9c34-d685f8565b93"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0916 22:59:59.014] has:missing is not found
I0916 22:59:59.096] Successful
I0916 22:59:59.096] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0916 22:59:59.096] 	template was:
I0916 22:59:59.096] 		{{.missing}}
I0916 22:59:59.097] 	raw data was:
I0916 22:59:59.097] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-09-16T22:59:58Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1568674798-20249","resourceVersion":"698","selfLink":"/api/v1/namespaces/namespace-1568674798-20249/pods/valid-pod","uid":"be68744a-9f18-40fe-9c34-d685f8565b93"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0916 22:59:59.098] 	object given to template engine was:
I0916 22:59:59.098] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-09-16T22:59:58Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1568674798-20249 resourceVersion:698 selfLink:/api/v1/namespaces/namespace-1568674798-20249/pods/valid-pod uid:be68744a-9f18-40fe-9c34-d685f8565b93] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0916 22:59:59.098] has:map has no entry for key "missing"
W0916 22:59:59.199] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0916 23:00:00.179] Successful
I0916 23:00:00.179] message:NAME        READY   STATUS    RESTARTS   AGE
I0916 23:00:00.180] valid-pod   0/1     Pending   0          1s
I0916 23:00:00.180] STATUS      REASON          MESSAGE
I0916 23:00:00.180] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 23:00:00.180] has:STATUS
I0916 23:00:00.181] Successful
I0916 23:00:00.181] message:NAME        READY   STATUS    RESTARTS   AGE
I0916 23:00:00.182] valid-pod   0/1     Pending   0          1s
I0916 23:00:00.182] STATUS      REASON          MESSAGE
I0916 23:00:00.182] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 23:00:00.182] has:valid-pod
I0916 23:00:01.264] Successful
I0916 23:00:01.264] message:pod/valid-pod
I0916 23:00:01.264] has not:STATUS
I0916 23:00:01.266] Successful
I0916 23:00:01.267] message:pod/valid-pod
... skipping 72 lines ...
I0916 23:00:02.357] status:
I0916 23:00:02.357]   phase: Pending
I0916 23:00:02.357]   qosClass: Guaranteed
I0916 23:00:02.358] ---
I0916 23:00:02.358] has:name: valid-pod
I0916 23:00:02.457] Successful
I0916 23:00:02.457] message:Error from server (NotFound): pods "invalid-pod" not found
I0916 23:00:02.457] has:"invalid-pod" not found
I0916 23:00:02.541] pod "valid-pod" deleted
I0916 23:00:02.648] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 23:00:02.838] (Bpod/redis-master created
I0916 23:00:02.842] pod/valid-pod created
I0916 23:00:02.939] Successful
... skipping 35 lines ...
I0916 23:00:04.154] +++ command: run_kubectl_exec_pod_tests
I0916 23:00:04.165] +++ [0916 23:00:04] Creating namespace namespace-1568674804-24427
I0916 23:00:04.236] namespace/namespace-1568674804-24427 created
I0916 23:00:04.310] Context "test" modified.
I0916 23:00:04.318] +++ [0916 23:00:04] Testing kubectl exec POD COMMAND
I0916 23:00:04.399] Successful
I0916 23:00:04.400] message:Error from server (NotFound): pods "abc" not found
I0916 23:00:04.400] has:pods "abc" not found
I0916 23:00:04.549] pod/test-pod created
I0916 23:00:04.646] Successful
I0916 23:00:04.646] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0916 23:00:04.647] has not:pods "test-pod" not found
I0916 23:00:04.648] Successful
I0916 23:00:04.648] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0916 23:00:04.648] has not:pod or type/name must be specified
I0916 23:00:04.725] pod "test-pod" deleted
I0916 23:00:04.745] +++ exit code: 0
I0916 23:00:04.779] Recording: run_kubectl_exec_resource_name_tests
I0916 23:00:04.780] Running command: run_kubectl_exec_resource_name_tests
I0916 23:00:04.803] 
... skipping 2 lines ...
I0916 23:00:04.810] +++ command: run_kubectl_exec_resource_name_tests
I0916 23:00:04.821] +++ [0916 23:00:04] Creating namespace namespace-1568674804-31151
I0916 23:00:04.894] namespace/namespace-1568674804-31151 created
I0916 23:00:04.963] Context "test" modified.
I0916 23:00:04.970] +++ [0916 23:00:04] Testing kubectl exec TYPE/NAME COMMAND
I0916 23:00:05.067] Successful
I0916 23:00:05.067] message:error: the server doesn't have a resource type "foo"
I0916 23:00:05.067] has:error:
I0916 23:00:05.150] Successful
I0916 23:00:05.150] message:Error from server (NotFound): deployments.apps "bar" not found
I0916 23:00:05.150] has:"bar" not found
I0916 23:00:05.312] pod/test-pod created
I0916 23:00:05.480] replicaset.apps/frontend created
W0916 23:00:05.581] I0916 23:00:05.483447   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674804-31151", Name:"frontend", UID:"10321e05-17b0-4240-a075-020f910e337f", APIVersion:"apps/v1", ResourceVersion:"752", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-k2kqh
W0916 23:00:05.582] I0916 23:00:05.486069   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674804-31151", Name:"frontend", UID:"10321e05-17b0-4240-a075-020f910e337f", APIVersion:"apps/v1", ResourceVersion:"752", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6c2s6
W0916 23:00:05.582] I0916 23:00:05.487191   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674804-31151", Name:"frontend", UID:"10321e05-17b0-4240-a075-020f910e337f", APIVersion:"apps/v1", ResourceVersion:"752", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vl6h5
I0916 23:00:05.683] configmap/test-set-env-config created
I0916 23:00:05.736] Successful
I0916 23:00:05.737] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0916 23:00:05.737] has:not implemented
I0916 23:00:05.826] Successful
I0916 23:00:05.826] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0916 23:00:05.826] has not:not found
I0916 23:00:05.828] Successful
I0916 23:00:05.829] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0916 23:00:05.829] has not:pod or type/name must be specified
I0916 23:00:05.936] Successful
I0916 23:00:05.937] message:Error from server (BadRequest): pod frontend-6c2s6 does not have a host assigned
I0916 23:00:05.937] has not:not found
I0916 23:00:05.939] Successful
I0916 23:00:05.940] message:Error from server (BadRequest): pod frontend-6c2s6 does not have a host assigned
I0916 23:00:05.940] has not:pod or type/name must be specified
I0916 23:00:06.015] pod "test-pod" deleted
I0916 23:00:06.091] replicaset.apps "frontend" deleted
I0916 23:00:06.168] configmap "test-set-env-config" deleted
I0916 23:00:06.187] +++ exit code: 0
I0916 23:00:06.223] Recording: run_create_secret_tests
I0916 23:00:06.224] Running command: run_create_secret_tests
I0916 23:00:06.247] 
I0916 23:00:06.249] +++ Running case: test-cmd.run_create_secret_tests 
I0916 23:00:06.252] +++ working dir: /go/src/k8s.io/kubernetes
I0916 23:00:06.254] +++ command: run_create_secret_tests
I0916 23:00:06.352] Successful
I0916 23:00:06.352] message:Error from server (NotFound): secrets "mysecret" not found
I0916 23:00:06.353] has:secrets "mysecret" not found
I0916 23:00:06.516] Successful
I0916 23:00:06.517] message:Error from server (NotFound): secrets "mysecret" not found
I0916 23:00:06.517] has:secrets "mysecret" not found
I0916 23:00:06.517] Successful
I0916 23:00:06.518] message:user-specified
I0916 23:00:06.518] has:user-specified
I0916 23:00:06.587] Successful
I0916 23:00:06.660] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"412b6ccf-8cbe-4e69-9f99-51c1c0c75bb0","resourceVersion":"772","creationTimestamp":"2019-09-16T23:00:06Z"}}
... skipping 2 lines ...
I0916 23:00:06.836] has:uid
I0916 23:00:06.911] Successful
I0916 23:00:06.912] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"412b6ccf-8cbe-4e69-9f99-51c1c0c75bb0","resourceVersion":"773","creationTimestamp":"2019-09-16T23:00:06Z"},"data":{"key1":"config1"}}
I0916 23:00:06.912] has:config1
I0916 23:00:06.978] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"412b6ccf-8cbe-4e69-9f99-51c1c0c75bb0"}}
I0916 23:00:07.063] Successful
I0916 23:00:07.064] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0916 23:00:07.064] has:configmaps "tester-update-cm" not found
I0916 23:00:07.075] +++ exit code: 0
I0916 23:00:07.106] Recording: run_kubectl_create_kustomization_directory_tests
I0916 23:00:07.107] Running command: run_kubectl_create_kustomization_directory_tests
I0916 23:00:07.128] 
I0916 23:00:07.130] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
I0916 23:00:09.847] valid-pod   0/1     Pending   0          0s
I0916 23:00:09.847] has:valid-pod
I0916 23:00:10.939] Successful
I0916 23:00:10.939] message:NAME        READY   STATUS    RESTARTS   AGE
I0916 23:00:10.940] valid-pod   0/1     Pending   0          0s
I0916 23:00:10.940] STATUS      REASON          MESSAGE
I0916 23:00:10.940] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 23:00:10.940] has:Timeout exceeded while reading body
I0916 23:00:11.028] Successful
I0916 23:00:11.028] message:NAME        READY   STATUS    RESTARTS   AGE
I0916 23:00:11.029] valid-pod   0/1     Pending   0          2s
I0916 23:00:11.029] has:valid-pod
I0916 23:00:11.100] Successful
I0916 23:00:11.100] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0916 23:00:11.101] has:Invalid timeout value
I0916 23:00:11.182] pod "valid-pod" deleted
I0916 23:00:11.203] +++ exit code: 0
I0916 23:00:11.239] Recording: run_crd_tests
I0916 23:00:11.240] Running command: run_crd_tests
I0916 23:00:11.262] 
... skipping 157 lines ...
I0916 23:00:15.866] foo.company.com/test patched
I0916 23:00:15.954] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0916 23:00:16.032] (Bfoo.company.com/test patched
I0916 23:00:16.122] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0916 23:00:16.210] (Bfoo.company.com/test patched
I0916 23:00:16.299] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0916 23:00:16.444] (B+++ [0916 23:00:16] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0916 23:00:16.506] {
I0916 23:00:16.507]     "apiVersion": "company.com/v1",
I0916 23:00:16.507]     "kind": "Foo",
I0916 23:00:16.507]     "metadata": {
I0916 23:00:16.508]         "annotations": {
I0916 23:00:16.508]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 191 lines ...
I0916 23:00:42.224] (Bnamespace/non-native-resources created
I0916 23:00:42.418] bar.company.com/test created
I0916 23:00:42.532] crd.sh:455: Successful get bars {{len .items}}: 1
I0916 23:00:42.616] (Bnamespace "non-native-resources" deleted
I0916 23:00:47.823] crd.sh:458: Successful get bars {{len .items}}: 0
I0916 23:00:47.982] (Bcustomresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0916 23:00:48.083] Error from server (NotFound): namespaces "non-native-resources" not found
I0916 23:00:48.184] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0916 23:00:48.209] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0916 23:00:48.312] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0916 23:00:48.348] +++ exit code: 0
I0916 23:00:48.384] Recording: run_cmd_with_img_tests
I0916 23:00:48.385] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
I0916 23:00:48.696] has:deployment.apps/test1 created
W0916 23:00:48.796] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0916 23:00:48.797] I0916 23:00:48.681651   52967 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568674848-21721", Name:"test1", UID:"8346a50f-0058-47b7-9a34-d061cc4fcec8", APIVersion:"apps/v1", ResourceVersion:"929", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-6cdffdb5b8 to 1
W0916 23:00:48.798] I0916 23:00:48.685880   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674848-21721", Name:"test1-6cdffdb5b8", UID:"32633c1a-e319-4a30-b21e-bc3ff2e8d743", APIVersion:"apps/v1", ResourceVersion:"930", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-nxpgl
I0916 23:00:48.898] deployment.apps "test1" deleted
I0916 23:00:48.899] Successful
I0916 23:00:48.899] message:error: Invalid image name "InvalidImageName": invalid reference format
I0916 23:00:48.899] has:error: Invalid image name "InvalidImageName": invalid reference format
I0916 23:00:48.903] +++ exit code: 0
I0916 23:00:48.938] +++ [0916 23:00:48] Testing recursive resources
I0916 23:00:48.944] +++ [0916 23:00:48] Creating namespace namespace-1568674848-749
I0916 23:00:49.015] namespace/namespace-1568674848-749 created
I0916 23:00:49.079] Context "test" modified.
I0916 23:00:49.166] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 23:00:49.459] (Bgeneric-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 23:00:49.461] (BSuccessful
I0916 23:00:49.462] message:pod/busybox0 created
I0916 23:00:49.462] pod/busybox1 created
I0916 23:00:49.462] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0916 23:00:49.462] has:error validating data: kind not set
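The two pod creations plus the validation failure above are the expected result of pointing kubectl at the whole test directory; the invocation is presumably along the lines of:
  kubectl create -f hack/testdata/recursive/pod --recursive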
I0916 23:00:49.549] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 23:00:49.716] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0916 23:00:49.718] Successful
I0916 23:00:49.718] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 23:00:49.719] has:Object 'Kind' is missing
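Every "Object 'Kind' is missing" failure in this block traces back to hack/testdata/recursive/pod/pod/busybox-broken.yaml. Reconstructed from the decoded JSON above, the manifest deliberately misspells kind as ind, roughly:
  apiVersion: v1
  ind: Pod
  metadata:
    labels:
      app: busybox2
    name: busybox2
  spec:
    containers:
    - command: ["sleep", "3600"]
      image: busybox
      imagePullPolicy: IfNotPresent
      name: busybox
    restartPolicy: Always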
I0916 23:00:49.804] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 23:00:50.063] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0916 23:00:50.065] Successful
I0916 23:00:50.065] message:pod/busybox0 replaced
I0916 23:00:50.065] pod/busybox1 replaced
I0916 23:00:50.066] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0916 23:00:50.066] has:error validating data: kind not set
I0916 23:00:50.149] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 23:00:50.244] Successful
I0916 23:00:50.244] message:Name:         busybox0
I0916 23:00:50.244] Namespace:    namespace-1568674848-749
I0916 23:00:50.244] Priority:     0
I0916 23:00:50.244] Node:         <none>
... skipping 159 lines ...
I0916 23:00:50.259] has:Object 'Kind' is missing
I0916 23:00:50.344] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 23:00:50.525] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0916 23:00:50.528] Successful
I0916 23:00:50.528] message:pod/busybox0 annotated
I0916 23:00:50.528] pod/busybox1 annotated
I0916 23:00:50.529] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 23:00:50.529] has:Object 'Kind' is missing
I0916 23:00:50.613] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 23:00:50.866] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0916 23:00:50.868] Successful
I0916 23:00:50.869] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0916 23:00:50.869] pod/busybox0 configured
I0916 23:00:50.869] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0916 23:00:50.869] pod/busybox1 configured
I0916 23:00:50.869] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0916 23:00:50.870] has:error validating data: kind not set
I0916 23:00:50.952] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 23:00:51.107] deployment.apps/nginx created
I0916 23:00:51.207] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0916 23:00:51.297] generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0916 23:00:51.456] generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
I0916 23:00:51.459] Successful
... skipping 42 lines ...
I0916 23:00:51.532] deployment.apps "nginx" deleted
I0916 23:00:51.631] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 23:00:51.789] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 23:00:51.792] Successful
I0916 23:00:51.793] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0916 23:00:51.793] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0916 23:00:51.793] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 23:00:51.793] has:Object 'Kind' is missing
I0916 23:00:51.885] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 23:00:51.974] Successful
I0916 23:00:51.975] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 23:00:51.976] has:busybox0:busybox1:
I0916 23:00:51.977] Successful
I0916 23:00:51.978] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 23:00:51.978] has:Object 'Kind' is missing
I0916 23:00:52.071] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 23:00:52.166] pod/busybox0 labeled
I0916 23:00:52.166] pod/busybox1 labeled
I0916 23:00:52.167] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 23:00:52.266] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0916 23:00:52.270] Successful
I0916 23:00:52.270] message:pod/busybox0 labeled
I0916 23:00:52.270] pod/busybox1 labeled
I0916 23:00:52.270] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 23:00:52.271] has:Object 'Kind' is missing
I0916 23:00:52.370] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 23:00:52.454] pod/busybox0 patched
I0916 23:00:52.454] pod/busybox1 patched
I0916 23:00:52.455] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 23:00:52.544] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0916 23:00:52.547] Successful
I0916 23:00:52.547] message:pod/busybox0 patched
I0916 23:00:52.547] pod/busybox1 patched
I0916 23:00:52.547] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 23:00:52.547] has:Object 'Kind' is missing
I0916 23:00:52.639] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 23:00:52.834] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 23:00:52.837] Successful
I0916 23:00:52.837] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0916 23:00:52.838] pod "busybox0" force deleted
I0916 23:00:52.838] pod "busybox1" force deleted
I0916 23:00:52.838] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 23:00:52.838] has:Object 'Kind' is missing
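The force deletions above, together with the warning about immediate deletion, correspond to a zero-grace-period recursive delete, roughly:
  kubectl delete -f hack/testdata/recursive/pod --recursive --force --grace-period=0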
I0916 23:00:52.926] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 23:00:53.119] replicationcontroller/busybox0 created
I0916 23:00:53.125] replicationcontroller/busybox1 created
W0916 23:00:53.226] W0916 23:00:48.989793   49450 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0916 23:00:53.226] E0916 23:00:48.991315   52967 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.226] W0916 23:00:49.113699   49450 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0916 23:00:53.226] E0916 23:00:49.115170   52967 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.227] W0916 23:00:49.217168   49450 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0916 23:00:53.227] E0916 23:00:49.218709   52967 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.227] W0916 23:00:49.324348   49450 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0916 23:00:53.227] E0916 23:00:49.325804   52967 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.228] E0916 23:00:49.993029   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.228] E0916 23:00:50.116718   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.228] E0916 23:00:50.220326   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.229] E0916 23:00:50.328252   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.229] E0916 23:00:50.994930   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.229] I0916 23:00:51.116411   52967 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568674848-749", Name:"nginx", UID:"755b010d-399d-44f3-adb9-834b2f21761b", APIVersion:"apps/v1", ResourceVersion:"954", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W0916 23:00:53.230] E0916 23:00:51.118077   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.230] I0916 23:00:51.120420   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674848-749", Name:"nginx-f87d999f7", UID:"bcba1e66-9d0f-4edc-8085-34babd7be28b", APIVersion:"apps/v1", ResourceVersion:"955", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-k5lz8
W0916 23:00:53.230] I0916 23:00:51.123216   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674848-749", Name:"nginx-f87d999f7", UID:"bcba1e66-9d0f-4edc-8085-34babd7be28b", APIVersion:"apps/v1", ResourceVersion:"955", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-mn2ft
W0916 23:00:53.231] I0916 23:00:51.123933   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674848-749", Name:"nginx-f87d999f7", UID:"bcba1e66-9d0f-4edc-8085-34babd7be28b", APIVersion:"apps/v1", ResourceVersion:"955", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-szqjj
W0916 23:00:53.231] E0916 23:00:51.222023   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.231] E0916 23:00:51.329801   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.232] kubectl convert is DEPRECATED and will be removed in a future version.
W0916 23:00:53.232] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0916 23:00:53.232] E0916 23:00:51.996272   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.232] E0916 23:00:52.119665   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.233] E0916 23:00:52.223676   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.233] E0916 23:00:52.331561   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.233] I0916 23:00:52.716183   52967 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0916 23:00:53.233] E0916 23:00:52.998861   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.234] I0916 23:00:53.120201   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674848-749", Name:"busybox0", UID:"d092e947-5dec-4439-9b03-da351b14c701", APIVersion:"v1", ResourceVersion:"986", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-c9x45
W0916 23:00:53.234] E0916 23:00:53.121151   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.234] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0916 23:00:53.235] I0916 23:00:53.128183   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674848-749", Name:"busybox1", UID:"95c8c5c8-a474-4c27-8677-fe347e5788b1", APIVersion:"v1", ResourceVersion:"990", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-d8jbt
W0916 23:00:53.235] E0916 23:00:53.224959   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:53.333] E0916 23:00:53.333288   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:00:53.434] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 23:00:53.435] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 23:00:53.435] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0916 23:00:53.525] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0916 23:00:53.730] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0916 23:00:53.822] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0916 23:00:53.824] Successful
I0916 23:00:53.824] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0916 23:00:53.825] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0916 23:00:53.825] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 23:00:53.825] has:Object 'Kind' is missing
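The HPA assertions above (minReplicas 1, maxReplicas 2, target CPU 80%) imply an autoscale pass over the same rc directory; a sketch of that call, assuming autoscale accepts the recursive filename flag here, is:
  kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80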
I0916 23:00:53.902] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0916 23:00:53.990] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0916 23:00:54.093] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 23:00:54.190] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0916 23:00:54.280] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0916 23:00:54.473] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0916 23:00:54.565] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0916 23:00:54.568] Successful
I0916 23:00:54.568] message:service/busybox0 exposed
I0916 23:00:54.568] service/busybox1 exposed
I0916 23:00:54.569] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 23:00:54.570] has:Object 'Kind' is missing
I0916 23:00:54.665] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 23:00:54.776] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0916 23:00:54.890] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0916 23:00:55.113] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0916 23:00:55.211] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0916 23:00:55.214] Successful
I0916 23:00:55.214] message:replicationcontroller/busybox0 scaled
I0916 23:00:55.214] replicationcontroller/busybox1 scaled
I0916 23:00:55.215] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 23:00:55.215] has:Object 'Kind' is missing
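The replica counts moving from 1 to 2 for both controllers point to a recursive scale, presumably:
  kubectl scale -f hack/testdata/recursive/rc --recursive --replicas=2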
I0916 23:00:55.303] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 23:00:55.481] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 23:00:55.484] Successful
I0916 23:00:55.484] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0916 23:00:55.484] replicationcontroller "busybox0" force deleted
I0916 23:00:55.484] replicationcontroller "busybox1" force deleted
I0916 23:00:55.485] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 23:00:55.485] has:Object 'Kind' is missing
I0916 23:00:55.572] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 23:00:55.745] deployment.apps/nginx1-deployment created
I0916 23:00:55.752] deployment.apps/nginx0-deployment created
W0916 23:00:55.853] E0916 23:00:54.000640   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:55.854] E0916 23:00:54.122720   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:55.854] E0916 23:00:54.226402   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:55.854] E0916 23:00:54.335158   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:55.855] E0916 23:00:55.002202   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:55.855] I0916 23:00:55.004879   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674848-749", Name:"busybox0", UID:"d092e947-5dec-4439-9b03-da351b14c701", APIVersion:"v1", ResourceVersion:"1007", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-47bqt
W0916 23:00:55.856] I0916 23:00:55.020659   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674848-749", Name:"busybox1", UID:"95c8c5c8-a474-4c27-8677-fe347e5788b1", APIVersion:"v1", ResourceVersion:"1012", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-889zf
W0916 23:00:55.856] E0916 23:00:55.124173   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:55.856] E0916 23:00:55.228032   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:55.857] E0916 23:00:55.336541   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:55.857] I0916 23:00:55.750125   52967 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568674848-749", Name:"nginx1-deployment", UID:"3748f3a6-3271-4f76-bb99-2e1d4ffa951c", APIVersion:"apps/v1", ResourceVersion:"1028", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
W0916 23:00:55.857] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0916 23:00:55.858] I0916 23:00:55.755343   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674848-749", Name:"nginx1-deployment-7bdbbfb5cf", UID:"abdb6ee7-1966-4ee6-a6f6-adab1a71023e", APIVersion:"apps/v1", ResourceVersion:"1029", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-5cd4p
W0916 23:00:55.858] I0916 23:00:55.755787   52967 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568674848-749", Name:"nginx0-deployment", UID:"425113fc-e7fd-4a83-9748-f5874ebc2acf", APIVersion:"apps/v1", ResourceVersion:"1030", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
W0916 23:00:55.858] I0916 23:00:55.760405   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674848-749", Name:"nginx1-deployment-7bdbbfb5cf", UID:"abdb6ee7-1966-4ee6-a6f6-adab1a71023e", APIVersion:"apps/v1", ResourceVersion:"1029", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-6n8mj
W0916 23:00:55.859] I0916 23:00:55.763964   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674848-749", Name:"nginx0-deployment-57c6bff7f6", UID:"a5b9aab7-2f7e-410d-81b1-7a508667439f", APIVersion:"apps/v1", ResourceVersion:"1033", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-z4zfg
W0916 23:00:55.859] I0916 23:00:55.764306   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674848-749", Name:"nginx0-deployment-57c6bff7f6", UID:"a5b9aab7-2f7e-410d-81b1-7a508667439f", APIVersion:"apps/v1", ResourceVersion:"1033", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-xsr7g
I0916 23:00:55.960] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0916 23:00:55.960] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0916 23:00:56.168] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0916 23:00:56.171] Successful
I0916 23:00:56.171] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0916 23:00:56.171] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0916 23:00:56.172] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 23:00:56.172] has:Object 'Kind' is missing
I0916 23:00:56.263] deployment.apps/nginx1-deployment paused
I0916 23:00:56.268] deployment.apps/nginx0-deployment paused
I0916 23:00:56.368] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0916 23:00:56.371] Successful
I0916 23:00:56.372] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 23:00:56.372] has:Object 'Kind' is missing
I0916 23:00:56.462] deployment.apps/nginx1-deployment resumed
I0916 23:00:56.467] deployment.apps/nginx0-deployment resumed
W0916 23:00:56.568] E0916 23:00:56.004291   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:56.569] E0916 23:00:56.125634   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:56.569] E0916 23:00:56.229787   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:56.570] E0916 23:00:56.338390   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:00:56.670] generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I0916 23:00:56.671] Successful
I0916 23:00:56.672] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 23:00:56.672] has:Object 'Kind' is missing
I0916 23:00:56.688] Successful
I0916 23:00:56.688] message:deployment.apps/nginx1-deployment 
I0916 23:00:56.689] REVISION  CHANGE-CAUSE
I0916 23:00:56.689] 1         <none>
I0916 23:00:56.689] 
I0916 23:00:56.689] deployment.apps/nginx0-deployment 
I0916 23:00:56.689] REVISION  CHANGE-CAUSE
I0916 23:00:56.689] 1         <none>
I0916 23:00:56.689] 
I0916 23:00:56.690] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 23:00:56.690] has:nginx0-deployment
I0916 23:00:56.690] Successful
I0916 23:00:56.690] message:deployment.apps/nginx1-deployment 
I0916 23:00:56.691] REVISION  CHANGE-CAUSE
I0916 23:00:56.691] 1         <none>
I0916 23:00:56.691] 
I0916 23:00:56.691] deployment.apps/nginx0-deployment 
I0916 23:00:56.691] REVISION  CHANGE-CAUSE
I0916 23:00:56.691] 1         <none>
I0916 23:00:56.691] 
I0916 23:00:56.691] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 23:00:56.692] has:nginx1-deployment
I0916 23:00:56.693] Successful
I0916 23:00:56.693] message:deployment.apps/nginx1-deployment 
I0916 23:00:56.693] REVISION  CHANGE-CAUSE
I0916 23:00:56.693] 1         <none>
I0916 23:00:56.693] 
I0916 23:00:56.693] deployment.apps/nginx0-deployment 
I0916 23:00:56.693] REVISION  CHANGE-CAUSE
I0916 23:00:56.694] 1         <none>
I0916 23:00:56.694] 
I0916 23:00:56.694] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 23:00:56.694] has:Object 'Kind' is missing
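The three revision tables above are rollout history output gathered across the deployment directory, presumably:
  kubectl rollout history -f hack/testdata/recursive/deployment --recursive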
I0916 23:00:56.774] deployment.apps "nginx1-deployment" force deleted
I0916 23:00:56.778] deployment.apps "nginx0-deployment" force deleted
W0916 23:00:56.880] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0916 23:00:56.881] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0916 23:00:57.007] E0916 23:00:57.006548   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:57.127] E0916 23:00:57.127207   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:57.232] E0916 23:00:57.231332   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:57.340] E0916 23:00:57.339982   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:00:57.877] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 23:00:58.039] replicationcontroller/busybox0 created
I0916 23:00:58.043] replicationcontroller/busybox1 created
I0916 23:00:58.138] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 23:00:58.234] Successful
I0916 23:00:58.235] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0916 23:00:58.237] message:no rollbacker has been implemented for "ReplicationController"
I0916 23:00:58.237] no rollbacker has been implemented for "ReplicationController"
I0916 23:00:58.238] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 23:00:58.238] has:Object 'Kind' is missing
I0916 23:00:58.324] Successful
I0916 23:00:58.325] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 23:00:58.325] error: replicationcontrollers "busybox0" pausing is not supported
I0916 23:00:58.325] error: replicationcontrollers "busybox1" pausing is not supported
I0916 23:00:58.325] has:Object 'Kind' is missing
I0916 23:00:58.327] Successful
I0916 23:00:58.327] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 23:00:58.327] error: replicationcontrollers "busybox0" pausing is not supported
I0916 23:00:58.328] error: replicationcontrollers "busybox1" pausing is not supported
I0916 23:00:58.328] has:replicationcontrollers "busybox0" pausing is not supported
I0916 23:00:58.330] Successful
I0916 23:00:58.331] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 23:00:58.331] error: replicationcontrollers "busybox0" pausing is not supported
I0916 23:00:58.331] error: replicationcontrollers "busybox1" pausing is not supported
I0916 23:00:58.331] has:replicationcontrollers "busybox1" pausing is not supported
I0916 23:00:58.419] Successful
I0916 23:00:58.420] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 23:00:58.420] error: replicationcontrollers "busybox0" resuming is not supported
I0916 23:00:58.420] error: replicationcontrollers "busybox1" resuming is not supported
I0916 23:00:58.421] has:Object 'Kind' is missing
I0916 23:00:58.422] Successful
I0916 23:00:58.423] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 23:00:58.423] error: replicationcontrollers "busybox0" resuming is not supported
I0916 23:00:58.423] error: replicationcontrollers "busybox1" resuming is not supported
I0916 23:00:58.423] has:replicationcontrollers "busybox0" resuming is not supported
I0916 23:00:58.424] Successful
I0916 23:00:58.425] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 23:00:58.425] error: replicationcontrollers "busybox0" resuming is not supported
I0916 23:00:58.425] error: replicationcontrollers "busybox1" resuming is not supported
I0916 23:00:58.426] has:replicationcontrollers "busybox0" resuming is not supported
I0916 23:00:58.507] replicationcontroller "busybox0" force deleted
I0916 23:00:58.514] replicationcontroller "busybox1" force deleted
W0916 23:00:58.615] E0916 23:00:58.008257   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:58.615] I0916 23:00:58.042311   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674848-749", Name:"busybox0", UID:"0a1dd547-1d83-4f85-b682-7cd785afec58", APIVersion:"v1", ResourceVersion:"1077", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-g9v9v
W0916 23:00:58.616] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0916 23:00:58.616] I0916 23:00:58.047368   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674848-749", Name:"busybox1", UID:"e62b99e3-45f9-4af7-a9d9-b0e02acdf8cb", APIVersion:"v1", ResourceVersion:"1079", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-d2px5
W0916 23:00:58.616] E0916 23:00:58.128914   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:58.616] E0916 23:00:58.232452   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:58.617] E0916 23:00:58.341411   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:58.617] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0916 23:00:58.617] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0916 23:00:59.010] E0916 23:00:59.009676   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:59.131] E0916 23:00:59.130527   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:59.234] E0916 23:00:59.233961   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:00:59.343] E0916 23:00:59.343048   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:00:59.523] Recording: run_namespace_tests
I0916 23:00:59.524] Running command: run_namespace_tests
I0916 23:00:59.549] 
I0916 23:00:59.552] +++ Running case: test-cmd.run_namespace_tests 
I0916 23:00:59.555] +++ working dir: /go/src/k8s.io/kubernetes
I0916 23:00:59.557] +++ command: run_namespace_tests
I0916 23:00:59.568] +++ [0916 23:00:59] Testing kubectl(v1:namespaces)
I0916 23:00:59.643] namespace/my-namespace created
I0916 23:00:59.750] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0916 23:00:59.841] namespace "my-namespace" deleted
W0916 23:01:00.012] E0916 23:01:00.011176   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:00.132] E0916 23:01:00.132186   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:00.236] E0916 23:01:00.235896   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:00.345] E0916 23:01:00.344640   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:01.013] E0916 23:01:01.012898   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:01.134] E0916 23:01:01.134144   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:01.238] E0916 23:01:01.237301   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:01.348] E0916 23:01:01.347255   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:02.015] E0916 23:01:02.014594   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:02.136] E0916 23:01:02.135659   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:02.241] E0916 23:01:02.238702   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:02.349] E0916 23:01:02.348942   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:03.016] E0916 23:01:03.016126   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:03.138] E0916 23:01:03.137300   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:03.240] E0916 23:01:03.239967   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:03.351] E0916 23:01:03.351145   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:04.018] E0916 23:01:04.017676   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:04.139] E0916 23:01:04.138886   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:04.242] E0916 23:01:04.241612   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:04.354] E0916 23:01:04.353299   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:04.946] namespace/my-namespace condition met
I0916 23:01:05.037] Successful
I0916 23:01:05.037] message:Error from server (NotFound): namespaces "my-namespace" not found
I0916 23:01:05.037] has: not found
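The "condition met" line followed by the NotFound error matches waiting for the namespace to disappear and then re-reading it; an illustrative pair of commands (the timeout value is a placeholder) is:
  kubectl wait --for=delete ns/my-namespace --timeout=60s
  kubectl get ns/my-namespace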
I0916 23:01:05.118] namespace/my-namespace created
I0916 23:01:05.218] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0916 23:01:05.419] Successful
I0916 23:01:05.420] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0916 23:01:05.420] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0916 23:01:05.425] namespace "namespace-1568674808-31827" deleted
I0916 23:01:05.426] namespace "namespace-1568674809-11729" deleted
I0916 23:01:05.426] namespace "namespace-1568674811-2177" deleted
I0916 23:01:05.426] namespace "namespace-1568674812-27439" deleted
I0916 23:01:05.426] namespace "namespace-1568674848-21721" deleted
I0916 23:01:05.426] namespace "namespace-1568674848-749" deleted
I0916 23:01:05.426] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0916 23:01:05.427] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0916 23:01:05.427] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0916 23:01:05.427] has:warning: deleting cluster-scoped resources
I0916 23:01:05.427] Successful
I0916 23:01:05.428] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0916 23:01:05.428] namespace "kube-node-lease" deleted
I0916 23:01:05.428] namespace "my-namespace" deleted
I0916 23:01:05.428] namespace "namespace-1568674714-3961" deleted
... skipping 27 lines ...
I0916 23:01:05.433] namespace "namespace-1568674808-31827" deleted
I0916 23:01:05.433] namespace "namespace-1568674809-11729" deleted
I0916 23:01:05.433] namespace "namespace-1568674811-2177" deleted
I0916 23:01:05.434] namespace "namespace-1568674812-27439" deleted
I0916 23:01:05.434] namespace "namespace-1568674848-21721" deleted
I0916 23:01:05.434] namespace "namespace-1568674848-749" deleted
I0916 23:01:05.434] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0916 23:01:05.434] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0916 23:01:05.435] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0916 23:01:05.435] has:namespace "my-namespace" deleted
I0916 23:01:05.536] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0916 23:01:05.615] namespace/other created
I0916 23:01:05.713] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0916 23:01:05.811] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 23:01:06.007] pod/valid-pod created
I0916 23:01:06.111] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0916 23:01:06.215] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0916 23:01:06.299] Successful
I0916 23:01:06.300] message:error: a resource cannot be retrieved by name across all namespaces
I0916 23:01:06.301] has:a resource cannot be retrieved by name across all namespaces
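kubectl refuses to fetch a single named object together with --all-namespaces; the error above would be produced by a request of roughly this form:
  kubectl get pods valid-pod --all-namespaces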
I0916 23:01:06.394] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0916 23:01:06.480] pod "valid-pod" force deleted
I0916 23:01:06.585] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 23:01:06.661] namespace "other" deleted
W0916 23:01:06.762] E0916 23:01:05.019242   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:06.763] E0916 23:01:05.140420   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:06.763] E0916 23:01:05.242734   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:06.763] E0916 23:01:05.354436   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:06.764] E0916 23:01:06.020885   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:06.764] I0916 23:01:06.055275   52967 shared_informer.go:197] Waiting for caches to sync for resource quota
W0916 23:01:06.764] I0916 23:01:06.055363   52967 shared_informer.go:204] Caches are synced for resource quota 
W0916 23:01:06.765] E0916 23:01:06.141908   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:06.765] E0916 23:01:06.244519   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:06.765] E0916 23:01:06.355964   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:06.765] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0916 23:01:06.766] I0916 23:01:06.483986   52967 shared_informer.go:197] Waiting for caches to sync for garbage collector
W0916 23:01:06.766] I0916 23:01:06.484073   52967 shared_informer.go:204] Caches are synced for garbage collector 
W0916 23:01:07.023] E0916 23:01:07.022384   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:07.144] E0916 23:01:07.143685   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:07.246] E0916 23:01:07.246079   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:07.358] E0916 23:01:07.357529   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:08.024] E0916 23:01:08.023908   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:08.146] E0916 23:01:08.145337   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:08.248] E0916 23:01:08.247602   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:08.360] E0916 23:01:08.359797   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:08.615] I0916 23:01:08.614353   52967 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1568674848-749
W0916 23:01:08.620] I0916 23:01:08.619341   52967 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1568674848-749
W0916 23:01:09.026] E0916 23:01:09.025417   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:09.148] E0916 23:01:09.147229   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:09.250] E0916 23:01:09.249258   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:09.362] E0916 23:01:09.361395   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:10.027] E0916 23:01:10.026365   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:10.149] E0916 23:01:10.148706   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:10.251] E0916 23:01:10.250746   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:10.375] E0916 23:01:10.374665   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:11.028] E0916 23:01:11.027467   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:11.150] E0916 23:01:11.149965   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:11.252] E0916 23:01:11.252200   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:11.378] E0916 23:01:11.378275   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:11.770] +++ exit code: 0
I0916 23:01:11.802] Recording: run_secrets_test
I0916 23:01:11.803] Running command: run_secrets_test
I0916 23:01:11.825] 
I0916 23:01:11.828] +++ Running case: test-cmd.run_secrets_test 
I0916 23:01:11.830] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 48 lines ...
I0916 23:01:12.951] secret "test-secret" deleted
I0916 23:01:13.052] core.sh:748: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 23:01:13.135] secret/test-secret created
I0916 23:01:13.240] core.sh:752: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0916 23:01:13.340] core.sh:753: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
I0916 23:01:13.508] secret "test-secret" deleted
W0916 23:01:13.609] E0916 23:01:12.029091   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:13.609] I0916 23:01:12.091205   69163 loader.go:375] Config loaded from file:  /tmp/tmp.tDOm9wq1pj/.kube/config
W0916 23:01:13.610] E0916 23:01:12.151253   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:13.610] E0916 23:01:12.253968   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:13.611] E0916 23:01:12.379640   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:13.611] E0916 23:01:13.030339   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:13.611] E0916 23:01:13.152870   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:13.612] E0916 23:01:13.255755   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:13.612] E0916 23:01:13.381358   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:13.713] core.sh:763: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 23:01:13.717] secret/test-secret created
I0916 23:01:13.822] core.sh:766: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0916 23:01:13.911] core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0916 23:01:13.986] secret "test-secret" deleted
I0916 23:01:14.066] secret/test-secret created
I0916 23:01:14.156] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0916 23:01:14.244] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0916 23:01:14.320] secret "test-secret" deleted
W0916 23:01:14.421] E0916 23:01:14.031802   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:14.422] E0916 23:01:14.154209   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:14.422] E0916 23:01:14.257253   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:14.423] E0916 23:01:14.383555   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:14.523] secret/secret-string-data created
I0916 23:01:14.588] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0916 23:01:14.672] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0916 23:01:14.756] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0916 23:01:14.830] secret "secret-string-data" deleted
I0916 23:01:14.945] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 23:01:15.100] secret "test-secret" deleted
I0916 23:01:15.179] namespace "test-secrets" deleted
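The secrets run above checks the typed secret kinds (core.sh:753 expects kubernetes.io/dockerconfigjson, core.sh:767 and 774 expect kubernetes.io/tls) and that stringData is write-only: core.sh:796-798 show the v1/v2 values surfacing base64-encoded under .data (djE=/djI=) while .stringData reads back as <no value>. A sketch under assumed credentials, file names and paths (none of these literals appear in the log):

  kubectl create secret docker-registry test-secret -n test-secrets \
    --docker-username=user --docker-password=pass --docker-email=user@example.com
  kubectl get secret/test-secret -n test-secrets -o go-template='{{.type}}'    # kubernetes.io/dockerconfigjson
  kubectl create secret tls test-secret-tls -n test-secrets --cert=tls.crt --key=tls.key
  kubectl get secret/test-secret-tls -n test-secrets -o go-template='{{.type}}'    # kubernetes.io/tls
  # stringData is write-only; applying a manifest (hypothetical file secret-string-data.yaml) such as
  #   apiVersion: v1
  #   kind: Secret
  #   metadata: {name: secret-string-data}
  #   stringData: {k1: v1, k2: v2}
  # stores the values base64-encoded under .data and never echoes .stringData back
  kubectl apply -n test-secrets -f secret-string-data.yaml
  kubectl get secret/secret-string-data -n test-secrets -o go-template='{{.data}}'    # map[k1:djE= k2:djI=]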
W0916 23:01:15.280] I0916 23:01:15.030255   52967 namespace_controller.go:171] Namespace has been deleted my-namespace
W0916 23:01:15.281] E0916 23:01:15.033254   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:15.281] E0916 23:01:15.155714   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:15.281] E0916 23:01:15.258718   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:15.385] E0916 23:01:15.385095   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:15.517] I0916 23:01:15.517129   52967 namespace_controller.go:171] Namespace has been deleted kube-node-lease
W0916 23:01:15.528] I0916 23:01:15.527384   52967 namespace_controller.go:171] Namespace has been deleted namespace-1568674730-5987
W0916 23:01:15.531] I0916 23:01:15.530998   52967 namespace_controller.go:171] Namespace has been deleted namespace-1568674717-8844
W0916 23:01:15.541] I0916 23:01:15.540444   52967 namespace_controller.go:171] Namespace has been deleted namespace-1568674735-17870
W0916 23:01:15.541] I0916 23:01:15.540484   52967 namespace_controller.go:171] Namespace has been deleted namespace-1568674729-7986
W0916 23:01:15.541] I0916 23:01:15.541570   52967 namespace_controller.go:171] Namespace has been deleted namespace-1568674714-3961
... skipping 17 lines ...
W0916 23:01:15.982] I0916 23:01:15.981337   52967 namespace_controller.go:171] Namespace has been deleted namespace-1568674789-23549
W0916 23:01:16.001] I0916 23:01:16.000502   52967 namespace_controller.go:171] Namespace has been deleted namespace-1568674770-9266
W0916 23:01:16.002] I0916 23:01:16.002431   52967 namespace_controller.go:171] Namespace has been deleted namespace-1568674804-24427
W0916 23:01:16.007] I0916 23:01:16.006491   52967 namespace_controller.go:171] Namespace has been deleted namespace-1568674798-20249
W0916 23:01:16.028] I0916 23:01:16.027531   52967 namespace_controller.go:171] Namespace has been deleted namespace-1568674808-28468
W0916 23:01:16.034] I0916 23:01:16.033353   52967 namespace_controller.go:171] Namespace has been deleted namespace-1568674808-31827
W0916 23:01:16.035] E0916 23:01:16.034752   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:16.047] I0916 23:01:16.047005   52967 namespace_controller.go:171] Namespace has been deleted namespace-1568674804-31151
W0916 23:01:16.119] I0916 23:01:16.118641   52967 namespace_controller.go:171] Namespace has been deleted namespace-1568674809-11729
W0916 23:01:16.130] I0916 23:01:16.130016   52967 namespace_controller.go:171] Namespace has been deleted namespace-1568674811-2177
W0916 23:01:16.137] I0916 23:01:16.137098   52967 namespace_controller.go:171] Namespace has been deleted namespace-1568674812-27439
W0916 23:01:16.154] I0916 23:01:16.154134   52967 namespace_controller.go:171] Namespace has been deleted namespace-1568674848-21721
W0916 23:01:16.157] E0916 23:01:16.157144   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:16.203] I0916 23:01:16.202487   52967 namespace_controller.go:171] Namespace has been deleted namespace-1568674848-749
W0916 23:01:16.262] E0916 23:01:16.261438   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:16.387] E0916 23:01:16.386965   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:16.745] I0916 23:01:16.744533   52967 namespace_controller.go:171] Namespace has been deleted other
W0916 23:01:17.037] E0916 23:01:17.036278   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:17.159] E0916 23:01:17.158683   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:17.263] E0916 23:01:17.262936   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:17.389] E0916 23:01:17.388559   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:18.038] E0916 23:01:18.037926   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:18.161] E0916 23:01:18.160328   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:18.264] E0916 23:01:18.264278   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:18.390] E0916 23:01:18.390066   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:19.040] E0916 23:01:19.039602   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:19.162] E0916 23:01:19.161884   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:19.266] E0916 23:01:19.265556   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:19.393] E0916 23:01:19.392900   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:20.041] E0916 23:01:20.041124   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:20.164] E0916 23:01:20.163374   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:20.267] E0916 23:01:20.266432   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:20.367] +++ exit code: 0
I0916 23:01:20.368] Recording: run_configmap_tests
I0916 23:01:20.368] Running command: run_configmap_tests
I0916 23:01:20.369] 
I0916 23:01:20.369] +++ Running case: test-cmd.run_configmap_tests 
I0916 23:01:20.369] +++ working dir: /go/src/k8s.io/kubernetes
I0916 23:01:20.369] +++ command: run_configmap_tests
I0916 23:01:20.370] +++ [0916 23:01:20] Creating namespace namespace-1568674880-18964
I0916 23:01:20.438] namespace/namespace-1568674880-18964 created
I0916 23:01:20.517] Context "test" modified.
I0916 23:01:20.524] +++ [0916 23:01:20] Testing configmaps
W0916 23:01:20.625] E0916 23:01:20.394346   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:20.726] configmap/test-configmap created
I0916 23:01:20.814] core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
I0916 23:01:20.890] configmap "test-configmap" deleted
I0916 23:01:20.994] core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
I0916 23:01:21.065] namespace/test-configmaps created
I0916 23:01:21.162] core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
... skipping 3 lines ...
I0916 23:01:21.529] configmap/test-binary-configmap created
I0916 23:01:21.630] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0916 23:01:21.715] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0916 23:01:21.962] configmap "test-configmap" deleted
I0916 23:01:22.048] configmap "test-binary-configmap" deleted
I0916 23:01:22.144] namespace "test-configmaps" deleted
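core.sh:28-49 above cover plain and binary ConfigMaps in a dedicated namespace. Equivalent commands, with literal keys and file paths that are placeholders rather than values taken from the log:

  kubectl create namespace test-configmaps
  kubectl create configmap test-configmap -n test-configmaps --from-literal=key1=config1
  # non-UTF-8 file content ends up under .binaryData rather than .data
  kubectl create configmap test-binary-configmap -n test-configmaps --from-file=key1=./some-binary-file
  kubectl get configmap/test-configmap -n test-configmaps -o go-template='{{.metadata.name}}'
  kubectl delete configmap test-configmap test-binary-configmap -n test-configmaps
  kubectl delete namespace test-configmaps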
W0916 23:01:22.245] E0916 23:01:21.042621   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:22.245] E0916 23:01:21.164739   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:22.246] E0916 23:01:21.268103   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:22.246] E0916 23:01:21.395687   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:22.246] E0916 23:01:22.043953   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:22.247] E0916 23:01:22.166426   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:22.270] E0916 23:01:22.269624   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:22.398] E0916 23:01:22.397380   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:23.046] E0916 23:01:23.045635   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:23.168] E0916 23:01:23.168075   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:23.274] E0916 23:01:23.273488   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:23.401] E0916 23:01:23.401116   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:24.047] E0916 23:01:24.047082   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:24.170] E0916 23:01:24.169633   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:24.275] E0916 23:01:24.275243   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:24.403] E0916 23:01:24.402583   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:25.049] E0916 23:01:25.048579   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:25.172] E0916 23:01:25.171395   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:25.264] I0916 23:01:25.263960   52967 namespace_controller.go:171] Namespace has been deleted test-secrets
W0916 23:01:25.277] E0916 23:01:25.276754   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:25.404] E0916 23:01:25.403974   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:26.050] E0916 23:01:26.050133   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:26.174] E0916 23:01:26.173256   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:26.279] E0916 23:01:26.278577   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:26.406] E0916 23:01:26.405466   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:27.052] E0916 23:01:27.051601   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:27.176] E0916 23:01:27.175051   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:27.276] +++ exit code: 0
I0916 23:01:27.297] Recording: run_client_config_tests
I0916 23:01:27.298] Running command: run_client_config_tests
I0916 23:01:27.322] 
I0916 23:01:27.324] +++ Running case: test-cmd.run_client_config_tests 
I0916 23:01:27.328] +++ working dir: /go/src/k8s.io/kubernetes
I0916 23:01:27.330] +++ command: run_client_config_tests
I0916 23:01:27.342] +++ [0916 23:01:27] Creating namespace namespace-1568674887-28471
I0916 23:01:27.419] namespace/namespace-1568674887-28471 created
I0916 23:01:27.495] Context "test" modified.
I0916 23:01:27.501] +++ [0916 23:01:27] Testing client config
I0916 23:01:27.578] Successful
I0916 23:01:27.579] message:error: stat missing: no such file or directory
I0916 23:01:27.579] has:missing: no such file or directory
I0916 23:01:27.655] Successful
I0916 23:01:27.655] message:error: stat missing: no such file or directory
I0916 23:01:27.656] has:missing: no such file or directory
I0916 23:01:27.737] Successful
I0916 23:01:27.737] message:error: stat missing: no such file or directory
I0916 23:01:27.737] has:missing: no such file or directory
I0916 23:01:27.808] Successful
I0916 23:01:27.809] message:Error in configuration: context was not found for specified context: missing-context
I0916 23:01:27.809] has:context was not found for specified context: missing-context
I0916 23:01:27.885] Successful
I0916 23:01:27.885] message:error: no server found for cluster "missing-cluster"
I0916 23:01:27.885] has:no server found for cluster "missing-cluster"
I0916 23:01:27.975] Successful
I0916 23:01:27.975] message:error: auth info "missing-user" does not exist
I0916 23:01:27.975] has:auth info "missing-user" does not exist
W0916 23:01:28.076] E0916 23:01:27.280145   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:28.076] E0916 23:01:27.407245   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:28.077] E0916 23:01:28.053130   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:28.177] E0916 23:01:28.176467   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:28.277] Successful
I0916 23:01:28.278] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0916 23:01:28.278] has:error loading config file
I0916 23:01:28.278] Successful
I0916 23:01:28.278] message:error: stat missing-config: no such file or directory
I0916 23:01:28.279] has:no such file or directory
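Each error above corresponds to pointing kubectl at a kubeconfig element that does not exist; the trailing get pods is arbitrary, any request would surface the same messages:

  kubectl get pods --kubeconfig=missing            # error: stat missing: no such file or directory
  kubectl get pods --context=missing-context       # context was not found for specified context
  kubectl get pods --cluster=missing-cluster       # no server found for cluster "missing-cluster"
  kubectl get pods --user=missing-user             # auth info "missing-user" does not exist
  # a config file declaring an unrecognized version fails to load (the "v-1" message above)
  kubectl get pods --kubeconfig=/tmp/newconfig.yaml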
I0916 23:01:28.279] +++ exit code: 0
I0916 23:01:28.279] Recording: run_service_accounts_tests
I0916 23:01:28.279] Running command: run_service_accounts_tests
I0916 23:01:28.279] 
I0916 23:01:28.279] +++ Running case: test-cmd.run_service_accounts_tests 
I0916 23:01:28.280] +++ working dir: /go/src/k8s.io/kubernetes
I0916 23:01:28.280] +++ command: run_service_accounts_tests
I0916 23:01:28.283] +++ [0916 23:01:28] Creating namespace namespace-1568674888-2156
I0916 23:01:28.353] namespace/namespace-1568674888-2156 created
I0916 23:01:28.421] Context "test" modified.
I0916 23:01:28.429] +++ [0916 23:01:28] Testing service accounts
W0916 23:01:28.530] E0916 23:01:28.281833   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:28.531] E0916 23:01:28.409005   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:28.631] core.sh:828: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-service-accounts\" }}found{{end}}{{end}}:: :
I0916 23:01:28.632] namespace/test-service-accounts created
I0916 23:01:28.732] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0916 23:01:28.820] serviceaccount/test-service-account created
I0916 23:01:28.915] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0916 23:01:28.994] serviceaccount "test-service-account" deleted
I0916 23:01:29.082] namespace "test-service-accounts" deleted
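The service-account case above (core.sh:828-838) is the simplest of the lot and can be reproduced directly:

  kubectl create namespace test-service-accounts
  kubectl create serviceaccount test-service-account --namespace=test-service-accounts
  kubectl get serviceaccount/test-service-account --namespace=test-service-accounts -o go-template='{{.metadata.name}}'
  kubectl delete serviceaccount test-service-account --namespace=test-service-accounts
  kubectl delete namespace test-service-accounts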
W0916 23:01:29.183] E0916 23:01:29.054572   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:29.183] E0916 23:01:29.178108   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:29.284] E0916 23:01:29.283787   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:29.411] E0916 23:01:29.410538   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:30.056] E0916 23:01:30.056075   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:30.180] E0916 23:01:30.179637   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:30.286] E0916 23:01:30.286136   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:30.412] E0916 23:01:30.412017   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:31.058] E0916 23:01:31.057753   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:31.181] E0916 23:01:31.181118   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:31.288] E0916 23:01:31.287736   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:31.414] E0916 23:01:31.413482   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:32.060] E0916 23:01:32.059458   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:32.183] E0916 23:01:32.182757   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:32.238] I0916 23:01:32.237744   52967 namespace_controller.go:171] Namespace has been deleted test-configmaps
W0916 23:01:32.290] E0916 23:01:32.289348   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:32.415] E0916 23:01:32.414941   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:33.061] E0916 23:01:33.061191   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:33.185] E0916 23:01:33.184249   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:33.291] E0916 23:01:33.291149   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:33.417] E0916 23:01:33.416551   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:34.063] E0916 23:01:34.062729   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:34.186] E0916 23:01:34.185772   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:34.287] +++ exit code: 0
I0916 23:01:34.287] Recording: run_job_tests
I0916 23:01:34.287] Running command: run_job_tests
I0916 23:01:34.287] 
I0916 23:01:34.287] +++ Running case: test-cmd.run_job_tests 
I0916 23:01:34.287] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0916 23:01:35.076] Labels:                        run=pi
I0916 23:01:35.076] Annotations:                   <none>
I0916 23:01:35.076] Schedule:                      59 23 31 2 *
I0916 23:01:35.076] Concurrency Policy:            Allow
I0916 23:01:35.077] Suspend:                       False
I0916 23:01:35.077] Successful Job History Limit:  3
I0916 23:01:35.077] Failed Job History Limit:      1
I0916 23:01:35.077] Starting Deadline Seconds:     <unset>
I0916 23:01:35.078] Selector:                      <unset>
I0916 23:01:35.078] Parallelism:                   <unset>
I0916 23:01:35.078] Completions:                   <unset>
I0916 23:01:35.078] Pod Template:
I0916 23:01:35.079]   Labels:  run=pi
... skipping 32 lines ...
I0916 23:01:35.609]                 run=pi
I0916 23:01:35.609] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0916 23:01:35.609] Controlled By:  CronJob/pi
I0916 23:01:35.609] Parallelism:    1
I0916 23:01:35.610] Completions:    1
I0916 23:01:35.610] Start Time:     Mon, 16 Sep 2019 23:01:35 +0000
I0916 23:01:35.610] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0916 23:01:35.610] Pod Template:
I0916 23:01:35.611]   Labels:  controller-uid=03a5318a-2aee-47d8-8443-7e1b7f48182f
I0916 23:01:35.611]            job-name=test-job
I0916 23:01:35.611]            run=pi
I0916 23:01:35.611]   Containers:
I0916 23:01:35.612]    pi:
... skipping 15 lines ...
I0916 23:01:35.617]   Type    Reason            Age   From            Message
I0916 23:01:35.617]   ----    ------            ----  ----            -------
I0916 23:01:35.617]   Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-w2z92
I0916 23:01:35.692] job.batch "test-job" deleted
I0916 23:01:35.784] cronjob.batch "pi" deleted
I0916 23:01:35.873] namespace "test-jobs" deleted
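The describe output above shows a CronJob pi (schedule 59 23 31 2 *) created via the deprecated run generator, and a Job test-job instantiated from it, which is what sets the cronjob.kubernetes.io/instantiate: manual annotation and the Controlled By: CronJob/pi owner reference. A sketch of the two steps; the image and the pi-computing command are assumptions, only the names and the schedule appear in the log:

  kubectl run pi -n test-jobs --generator=cronjob/v1beta1 --schedule="59 23 31 2 *" \
    --restart=OnFailure --image=perl -- perl -Mbignum=bpi -wle 'print bpi(20)'
  # instantiating a Job from the CronJob adds the manual-instantiate annotation seen above
  kubectl create job test-job -n test-jobs --from=cronjob/pi
  kubectl delete job test-job -n test-jobs
  kubectl delete cronjob pi -n test-jobs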
W0916 23:01:35.974] E0916 23:01:34.292833   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:35.975] E0916 23:01:34.419183   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:35.975] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0916 23:01:35.975] E0916 23:01:35.063954   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:35.975] E0916 23:01:35.187057   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:35.976] E0916 23:01:35.294253   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:35.976] I0916 23:01:35.340390   52967 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"03a5318a-2aee-47d8-8443-7e1b7f48182f", APIVersion:"batch/v1", ResourceVersion:"1399", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-w2z92
W0916 23:01:35.976] E0916 23:01:35.420695   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:36.070] E0916 23:01:36.069364   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:36.189] E0916 23:01:36.188702   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:36.296] E0916 23:01:36.295920   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:36.423] E0916 23:01:36.422394   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:37.072] E0916 23:01:37.071378   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:37.190] E0916 23:01:37.190276   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:37.298] E0916 23:01:37.297395   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:37.424] E0916 23:01:37.423895   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:38.073] E0916 23:01:38.073052   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:38.192] E0916 23:01:38.191949   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:38.299] E0916 23:01:38.299058   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:38.426] E0916 23:01:38.425359   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:39.075] E0916 23:01:39.074427   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:39.165] I0916 23:01:39.165133   52967 namespace_controller.go:171] Namespace has been deleted test-service-accounts
W0916 23:01:39.194] E0916 23:01:39.193378   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:39.301] E0916 23:01:39.300913   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:39.427] E0916 23:01:39.426731   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:40.077] E0916 23:01:40.076429   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:40.195] E0916 23:01:40.195105   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:40.303] E0916 23:01:40.302338   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:40.428] E0916 23:01:40.428125   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:40.991] +++ exit code: 0
I0916 23:01:41.028] Recording: run_create_job_tests
I0916 23:01:41.029] Running command: run_create_job_tests
I0916 23:01:41.057] 
I0916 23:01:41.060] +++ Running case: test-cmd.run_create_job_tests 
I0916 23:01:41.063] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 22 lines ...
I0916 23:01:42.343] +++ working dir: /go/src/k8s.io/kubernetes
I0916 23:01:42.346] +++ command: run_pod_templates_tests
I0916 23:01:42.358] +++ [0916 23:01:42] Creating namespace namespace-1568674902-14790
I0916 23:01:42.459] namespace/namespace-1568674902-14790 created
I0916 23:01:42.540] Context "test" modified.
I0916 23:01:42.549] +++ [0916 23:01:42] Testing pod templates
W0916 23:01:42.650] E0916 23:01:41.077888   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:42.650] E0916 23:01:41.196593   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:42.650] E0916 23:01:41.304134   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:42.651] I0916 23:01:41.315040   52967 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568674901-14753", Name:"test-job", UID:"09d81ac9-e93e-42eb-897f-c7831e327d7f", APIVersion:"batch/v1", ResourceVersion:"1419", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-g4sqc
W0916 23:01:42.653] E0916 23:01:41.430036   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:42.653] I0916 23:01:41.573859   52967 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568674901-14753", Name:"test-job-pi", UID:"fd0d7125-160b-4731-a37f-962bab321b7a", APIVersion:"batch/v1", ResourceVersion:"1427", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-btr7m
W0916 23:01:42.653] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0916 23:01:42.654] I0916 23:01:41.944867   52967 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568674901-14753", Name:"my-pi", UID:"5cc9adfc-ef4e-484a-9e98-af52b8f7c9f2", APIVersion:"batch/v1", ResourceVersion:"1436", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-785fl
W0916 23:01:42.654] E0916 23:01:42.079501   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:42.654] E0916 23:01:42.198287   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:42.655] E0916 23:01:42.308684   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:42.655] E0916 23:01:42.431873   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:42.756] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 23:01:42.836] podtemplate/nginx created
I0916 23:01:42.934] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0916 23:01:43.017] NAME    CONTAINERS   IMAGES   POD LABELS
I0916 23:01:43.018] nginx   nginx        nginx    name=nginx
W0916 23:01:43.118] I0916 23:01:42.828006   49450 controller.go:606] quota admission added evaluator for: podtemplates
W0916 23:01:43.119] E0916 23:01:43.081505   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:43.201] E0916 23:01:43.200235   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:43.301] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0916 23:01:43.302] podtemplate "nginx" deleted
I0916 23:01:43.389] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 23:01:43.406] +++ exit code: 0
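The podtemplate rows above (core.sh:1415-1431) come from creating a bare PodTemplate object named nginx with a single nginx container and the label name=nginx; the manifest itself is not shown in this log, so the file name below is hypothetical:

  kubectl create -f podtemplate.yaml
  kubectl get podtemplates -o go-template='{{range.items}}{{.metadata.name}}:{{end}}'    # nginx:
  kubectl get podtemplates    # prints the NAME / CONTAINERS / IMAGES / POD LABELS columns above
  kubectl delete podtemplate nginx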
I0916 23:01:43.440] Recording: run_service_tests
I0916 23:01:43.440] Running command: run_service_tests
... skipping 65 lines ...
I0916 23:01:44.337] Port:              <unset>  6379/TCP
I0916 23:01:44.337] TargetPort:        6379/TCP
I0916 23:01:44.337] Endpoints:         <none>
I0916 23:01:44.338] Session Affinity:  None
I0916 23:01:44.338] Events:            <none>
I0916 23:01:44.338] 
W0916 23:01:44.438] E0916 23:01:43.310518   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:44.439] E0916 23:01:43.433657   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:44.439] E0916 23:01:44.083674   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:44.440] E0916 23:01:44.201749   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:44.440] E0916 23:01:44.312932   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:44.440] E0916 23:01:44.435234   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:44.541] Successful describe services:
I0916 23:01:44.541] Name:              kubernetes
I0916 23:01:44.541] Namespace:         default
I0916 23:01:44.541] Labels:            component=apiserver
I0916 23:01:44.542]                    provider=kubernetes
I0916 23:01:44.542] Annotations:       <none>
... skipping 178 lines ...
I0916 23:01:45.446]   selector:
I0916 23:01:45.446]     role: padawan
I0916 23:01:45.446]   sessionAffinity: None
I0916 23:01:45.446]   type: ClusterIP
I0916 23:01:45.446] status:
I0916 23:01:45.446]   loadBalancer: {}
W0916 23:01:45.547] E0916 23:01:45.085701   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:45.547] E0916 23:01:45.204000   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:45.548] E0916 23:01:45.314601   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:45.548] E0916 23:01:45.436611   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:45.548] error: you must specify resources by --filename when --local is set.
W0916 23:01:45.548] Example resource specifications include:
W0916 23:01:45.548]    '-f rsrc.yaml'
W0916 23:01:45.548]    '--filename=rsrc.json'
I0916 23:01:45.649] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0916 23:01:45.774] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0916 23:01:45.856] service "redis-master" deleted
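The dry-run YAML above (selector flipped to role: padawan) and the --local error message are both produced by kubectl set selector; roughly, with the file name being a placeholder:

  # --local only operates on objects read from a file, hence the error when -f is omitted
  kubectl set selector --local role=padawan -o yaml
  kubectl set selector -f redis-master-service.yaml role=padawan --local -o yaml
  # against the live object, --dry-run prints the mutated service without persisting it
  kubectl set selector services redis-master role=padawan --dry-run -o yaml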
... skipping 9 lines ...
I0916 23:01:47.020] service "redis-master" deleted
I0916 23:01:47.107] service "service-v1-test" deleted
I0916 23:01:47.207] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0916 23:01:47.297] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0916 23:01:47.465] service/redis-master created
W0916 23:01:47.566] I0916 23:01:45.964973   52967 namespace_controller.go:171] Namespace has been deleted test-jobs
W0916 23:01:47.566] E0916 23:01:46.087198   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:47.567] E0916 23:01:46.205385   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:47.567] E0916 23:01:46.316135   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:47.567] E0916 23:01:46.438441   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:47.568] E0916 23:01:47.088765   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:47.568] E0916 23:01:47.207156   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:47.568] E0916 23:01:47.318178   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:47.568] E0916 23:01:47.439816   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:47.669] service/redis-slave created
I0916 23:01:47.742] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0916 23:01:47.825] Successful
I0916 23:01:47.826] message:NAME           RSRC
I0916 23:01:47.826] kubernetes     145
I0916 23:01:47.826] redis-master   1470
... skipping 81 lines ...
I0916 23:01:52.675]   Volumes:	<none>
I0916 23:01:52.675]  (dry run)
I0916 23:01:52.772] apps.sh:83: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0916 23:01:52.872] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0916 23:01:52.970] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0916 23:01:53.079] daemonset.apps/bind rolled back
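apps.sh:83-85 check the DaemonSet pod template (a pause container and an nginx container), and the last line records a rollout undo of the bind DaemonSet. A sketch, assuming the test namespace is the current context's namespace:

  # same go-template as apps.sh:83 for the first container image
  kubectl get daemonset -o go-template='{{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}'
  # roll back to the previous revision and inspect the revision history
  kubectl rollout undo daemonset bind
  kubectl rollout history daemonset bind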
W0916 23:01:53.180] E0916 23:01:48.090006   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.180] E0916 23:01:48.208917   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.180] E0916 23:01:48.320151   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.181] E0916 23:01:48.440995   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.181] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0916 23:01:53.181] I0916 23:01:48.796616   52967 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"a87cb059-ee78-45f0-a8a0-39f1b3d7212f", APIVersion:"apps/v1", ResourceVersion:"1485", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-bd968f46 to 2
W0916 23:01:53.181] I0916 23:01:48.802357   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"d6b8868f-eec0-4c1c-bf82-f1dc0d99efa9", APIVersion:"apps/v1", ResourceVersion:"1486", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-8p2t2
W0916 23:01:53.182] I0916 23:01:48.805825   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"d6b8868f-eec0-4c1c-bf82-f1dc0d99efa9", APIVersion:"apps/v1", ResourceVersion:"1486", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-sbk4t
W0916 23:01:53.182] E0916 23:01:49.091635   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.182] E0916 23:01:49.210326   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.182] E0916 23:01:49.321737   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.183] E0916 23:01:49.442728   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.183] I0916 23:01:49.864187   49450 controller.go:606] quota admission added evaluator for: daemonsets.apps
W0916 23:01:53.183] I0916 23:01:49.874819   49450 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W0916 23:01:53.183] E0916 23:01:50.093256   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.183] E0916 23:01:50.212023   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.184] E0916 23:01:50.323335   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.184] E0916 23:01:50.444453   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.184] E0916 23:01:51.094253   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.184] E0916 23:01:51.213475   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.185] E0916 23:01:51.324818   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.185] E0916 23:01:51.445752   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.185] E0916 23:01:52.095771   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.185] E0916 23:01:52.215128   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.185] E0916 23:01:52.326451   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.186] E0916 23:01:52.447685   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.186] E0916 23:01:53.097790   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:53.217] E0916 23:01:53.217019   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:53.318] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0916 23:01:53.319] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0916 23:01:53.411] Successful
I0916 23:01:53.411] message:error: unable to find specified revision 1000000 in history
I0916 23:01:53.411] has:unable to find specified revision
I0916 23:01:53.503] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0916 23:01:53.600] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0916 23:01:53.698] daemonset.apps/bind rolled back
I0916 23:01:53.794] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0916 23:01:53.884] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
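The apps.sh:83-98 assertions above walk the daemonset through a rollback cycle: roll back to the single-container pause:2.0 template, fail on a nonexistent revision, then roll forward to the two-container template again. A minimal bash sketch of that pattern, assuming a kubeconfig pointed at the test apiserver; the daemonset name bind matches the log, but these lines are an illustration, not the actual contents of apps.sh:

    # Roll the daemonset back one revision and check the resulting pod template image.
    kubectl rollout undo daemonset/bind
    kubectl get daemonset bind -o go-template='{{(index .spec.template.spec.containers 0).image}}'
    # Requesting a revision that was never recorded fails, as in the log above.
    kubectl rollout undo daemonset/bind --to-revision=1000000 || echo "expected: unable to find specified revision"
    # Roll forward again to the two-container (pause + nginx) template.
    kubectl rollout undo daemonset/bind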
... skipping 22 lines ...
I0916 23:01:55.218] Namespace:    namespace-1568674914-5705
I0916 23:01:55.218] Selector:     app=guestbook,tier=frontend
I0916 23:01:55.219] Labels:       app=guestbook
I0916 23:01:55.219]               tier=frontend
I0916 23:01:55.219] Annotations:  <none>
I0916 23:01:55.219] Replicas:     3 current / 3 desired
I0916 23:01:55.220] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 23:01:55.220] Pod Template:
I0916 23:01:55.220]   Labels:  app=guestbook
I0916 23:01:55.220]            tier=frontend
I0916 23:01:55.221]   Containers:
I0916 23:01:55.221]    php-redis:
I0916 23:01:55.221]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0916 23:01:55.323] Namespace:    namespace-1568674914-5705
I0916 23:01:55.324] Selector:     app=guestbook,tier=frontend
I0916 23:01:55.324] Labels:       app=guestbook
I0916 23:01:55.324]               tier=frontend
I0916 23:01:55.324] Annotations:  <none>
I0916 23:01:55.324] Replicas:     3 current / 3 desired
I0916 23:01:55.324] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 23:01:55.324] Pod Template:
I0916 23:01:55.325]   Labels:  app=guestbook
I0916 23:01:55.325]            tier=frontend
I0916 23:01:55.325]   Containers:
I0916 23:01:55.325]    php-redis:
I0916 23:01:55.325]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0916 23:01:55.429] Namespace:    namespace-1568674914-5705
I0916 23:01:55.429] Selector:     app=guestbook,tier=frontend
I0916 23:01:55.429] Labels:       app=guestbook
I0916 23:01:55.429]               tier=frontend
I0916 23:01:55.429] Annotations:  <none>
I0916 23:01:55.429] Replicas:     3 current / 3 desired
I0916 23:01:55.429] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 23:01:55.430] Pod Template:
I0916 23:01:55.430]   Labels:  app=guestbook
I0916 23:01:55.430]            tier=frontend
I0916 23:01:55.430]   Containers:
I0916 23:01:55.430]    php-redis:
I0916 23:01:55.430]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0916 23:01:55.536] Namespace:    namespace-1568674914-5705
I0916 23:01:55.537] Selector:     app=guestbook,tier=frontend
I0916 23:01:55.537] Labels:       app=guestbook
I0916 23:01:55.537]               tier=frontend
I0916 23:01:55.537] Annotations:  <none>
I0916 23:01:55.537] Replicas:     3 current / 3 desired
I0916 23:01:55.537] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 23:01:55.537] Pod Template:
I0916 23:01:55.538]   Labels:  app=guestbook
I0916 23:01:55.538]            tier=frontend
I0916 23:01:55.538]   Containers:
I0916 23:01:55.538]    php-redis:
I0916 23:01:55.538]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
I0916 23:01:55.540]   Type    Reason            Age   From                    Message
I0916 23:01:55.540]   ----    ------            ----  ----                    -------
I0916 23:01:55.540]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-h2xdq
I0916 23:01:55.540]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-kkzkr
I0916 23:01:55.540]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-2gtnq
I0916 23:01:55.540]
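The Name/Namespace/Selector/Replicas/Events block above is standard kubectl describe output for the frontend replication controller; the events list records the three pods the replication-controller manager created. A one-line sketch of how such output is produced (the namespace value is copied from the log):

    # Print the human-readable summary, including recent events, for the rc.
    kubectl describe rc frontend --namespace=namespace-1568674914-5705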
W0916 23:01:55.641] E0916 23:01:53.328554   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:55.641] E0916 23:01:53.449355   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:55.646] E0916 23:01:53.713457   52967 daemon_controller.go:302] namespace-1568674911-27858/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1568674911-27858", SelfLink:"/apis/apps/v1/namespaces/namespace-1568674911-27858/daemonsets/bind", UID:"3699d557-925b-4fbf-a13e-cb3e168d86ec", ResourceVersion:"1555", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63704271711, loc:(*time.Location)(0x7751f40)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1568674911-27858\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00115bec0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, 
v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002310108), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0028b6a80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc00115bee0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00091b570)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00231015c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0916 23:01:55.646] E0916 23:01:54.100107   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:55.647] E0916 23:01:54.218198   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:55.647] E0916 23:01:54.330267   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:55.647] E0916 23:01:54.451042   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:55.648] I0916 23:01:54.526389   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"frontend", UID:"3b7bbb85-413f-41ce-87ba-fcd63d1f6c93", APIVersion:"v1", ResourceVersion:"1563", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5g9ps
W0916 23:01:55.648] I0916 23:01:54.530079   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"frontend", UID:"3b7bbb85-413f-41ce-87ba-fcd63d1f6c93", APIVersion:"v1", ResourceVersion:"1563", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-84jqs
W0916 23:01:55.648] I0916 23:01:54.531815   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"frontend", UID:"3b7bbb85-413f-41ce-87ba-fcd63d1f6c93", APIVersion:"v1", ResourceVersion:"1563", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-bcqk2
W0916 23:01:55.649] I0916 23:01:54.967870   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"frontend", UID:"eb713417-0fb7-4f40-a73b-e4cb96365b38", APIVersion:"v1", ResourceVersion:"1579", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-h2xdq
W0916 23:01:55.649] I0916 23:01:54.971676   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"frontend", UID:"eb713417-0fb7-4f40-a73b-e4cb96365b38", APIVersion:"v1", ResourceVersion:"1579", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kkzkr
W0916 23:01:55.650] I0916 23:01:54.971718   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"frontend", UID:"eb713417-0fb7-4f40-a73b-e4cb96365b38", APIVersion:"v1", ResourceVersion:"1579", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2gtnq
W0916 23:01:55.650] E0916 23:01:55.101597   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:55.650] E0916 23:01:55.219884   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:55.651] E0916 23:01:55.331683   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:55.651] E0916 23:01:55.452455   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:55.751] Successful describe rc:
I0916 23:01:55.752] Name:         frontend
I0916 23:01:55.752] Namespace:    namespace-1568674914-5705
I0916 23:01:55.752] Selector:     app=guestbook,tier=frontend
I0916 23:01:55.752] Labels:       app=guestbook
I0916 23:01:55.753]               tier=frontend
I0916 23:01:55.753] Annotations:  <none>
I0916 23:01:55.753] Replicas:     3 current / 3 desired
I0916 23:01:55.753] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 23:01:55.753] Pod Template:
I0916 23:01:55.753]   Labels:  app=guestbook
I0916 23:01:55.753]            tier=frontend
I0916 23:01:55.754]   Containers:
I0916 23:01:55.754]    php-redis:
I0916 23:01:55.754]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0916 23:01:55.781] Namespace:    namespace-1568674914-5705
I0916 23:01:55.782] Selector:     app=guestbook,tier=frontend
I0916 23:01:55.782] Labels:       app=guestbook
I0916 23:01:55.782]               tier=frontend
I0916 23:01:55.782] Annotations:  <none>
I0916 23:01:55.782] Replicas:     3 current / 3 desired
I0916 23:01:55.782] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 23:01:55.782] Pod Template:
I0916 23:01:55.783]   Labels:  app=guestbook
I0916 23:01:55.783]            tier=frontend
I0916 23:01:55.783]   Containers:
I0916 23:01:55.783]    php-redis:
I0916 23:01:55.783]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0916 23:01:55.891] Namespace:    namespace-1568674914-5705
I0916 23:01:55.891] Selector:     app=guestbook,tier=frontend
I0916 23:01:55.891] Labels:       app=guestbook
I0916 23:01:55.891]               tier=frontend
I0916 23:01:55.891] Annotations:  <none>
I0916 23:01:55.892] Replicas:     3 current / 3 desired
I0916 23:01:55.892] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 23:01:55.892] Pod Template:
I0916 23:01:55.892]   Labels:  app=guestbook
I0916 23:01:55.892]            tier=frontend
I0916 23:01:55.892]   Containers:
I0916 23:01:55.892]    php-redis:
I0916 23:01:55.893]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0916 23:01:56.001] Namespace:    namespace-1568674914-5705
I0916 23:01:56.001] Selector:     app=guestbook,tier=frontend
I0916 23:01:56.001] Labels:       app=guestbook
I0916 23:01:56.001]               tier=frontend
I0916 23:01:56.001] Annotations:  <none>
I0916 23:01:56.001] Replicas:     3 current / 3 desired
I0916 23:01:56.002] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 23:01:56.002] Pod Template:
I0916 23:01:56.002]   Labels:  app=guestbook
I0916 23:01:56.002]            tier=frontend
I0916 23:01:56.002]   Containers:
I0916 23:01:56.002]    php-redis:
I0916 23:01:56.002]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 19 lines ...
I0916 23:01:56.535] core.sh:1091: Successful get rc frontend {{.spec.replicas}}: 2
I0916 23:01:56.622] core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
I0916 23:01:56.704] replicationcontroller/frontend scaled
I0916 23:01:56.803] core.sh:1099: Successful get rc frontend {{.spec.replicas}}: 3
I0916 23:01:56.895] core.sh:1103: Successful get rc frontend {{.spec.replicas}}: 3
I0916 23:01:56.974] replicationcontroller/frontend scaled
W0916 23:01:57.075] E0916 23:01:56.103050   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:57.075] I0916 23:01:56.182910   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"frontend", UID:"eb713417-0fb7-4f40-a73b-e4cb96365b38", APIVersion:"v1", ResourceVersion:"1589", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-h2xdq
W0916 23:01:57.076] E0916 23:01:56.221378   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:57.076] E0916 23:01:56.333373   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:57.076] error: Expected replicas to be 3, was 2
W0916 23:01:57.077] E0916 23:01:56.454053   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:57.077] I0916 23:01:56.707753   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"frontend", UID:"eb713417-0fb7-4f40-a73b-e4cb96365b38", APIVersion:"v1", ResourceVersion:"1595", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8thcv
W0916 23:01:57.078] I0916 23:01:56.980285   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"frontend", UID:"eb713417-0fb7-4f40-a73b-e4cb96365b38", APIVersion:"v1", ResourceVersion:"1600", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-8thcv
W0916 23:01:57.105] E0916 23:01:57.104671   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:57.206] core.sh:1107: Successful get rc frontend {{.spec.replicas}}: 2
I0916 23:01:57.206] replicationcontroller "frontend" deleted
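core.sh:1091-1107 above exercises kubectl scale against the frontend replication controller, including a precondition check that fails with "Expected replicas to be 3, was 2" in the warnings. A rough sketch of the same pattern; the resource name follows the log, and the flags shown are the standard kubectl scale options rather than a copy of core.sh:

    # Scale only if the current size matches the precondition; a mismatch
    # produces the "Expected replicas to be N, was M" error seen above.
    kubectl scale rc frontend --current-replicas=3 --replicas=2 || true
    # Unconditional scale, then verify the spec via a go-template.
    kubectl scale rc frontend --replicas=3
    kubectl get rc frontend -o go-template='{{.spec.replicas}}'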
W0916 23:01:57.307] E0916 23:01:57.223108   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:57.335] E0916 23:01:57.334914   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:57.372] I0916 23:01:57.371064   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"redis-master", UID:"0b7beae9-dbfb-4ced-ab79-947dbf9d7d1a", APIVersion:"v1", ResourceVersion:"1611", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-jf6zx
W0916 23:01:57.456] E0916 23:01:57.455338   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:57.547] I0916 23:01:57.546303   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"redis-slave", UID:"fe313ed2-17ed-499e-8e8e-6420999a10fc", APIVersion:"v1", ResourceVersion:"1617", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-4ptth
W0916 23:01:57.550] I0916 23:01:57.549481   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"redis-slave", UID:"fe313ed2-17ed-499e-8e8e-6420999a10fc", APIVersion:"v1", ResourceVersion:"1617", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-kk5t5
W0916 23:01:57.642] I0916 23:01:57.641411   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"redis-master", UID:"0b7beae9-dbfb-4ced-ab79-947dbf9d7d1a", APIVersion:"v1", ResourceVersion:"1624", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-wmwmv
W0916 23:01:57.645] I0916 23:01:57.644955   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"redis-master", UID:"0b7beae9-dbfb-4ced-ab79-947dbf9d7d1a", APIVersion:"v1", ResourceVersion:"1624", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-n562q
W0916 23:01:57.647] I0916 23:01:57.645362   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"redis-master", UID:"0b7beae9-dbfb-4ced-ab79-947dbf9d7d1a", APIVersion:"v1", ResourceVersion:"1624", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-dzp6s
W0916 23:01:57.647] I0916 23:01:57.646085   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"redis-slave", UID:"fe313ed2-17ed-499e-8e8e-6420999a10fc", APIVersion:"v1", ResourceVersion:"1626", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-jxcll
... skipping 12 lines ...
I0916 23:01:58.358] deployment.apps "nginx-deployment" deleted
I0916 23:01:58.457] Successful
I0916 23:01:58.457] message:service/expose-test-deployment exposed
I0916 23:01:58.458] has:service/expose-test-deployment exposed
I0916 23:01:58.539] service "expose-test-deployment" deleted
I0916 23:01:58.633] Successful
I0916 23:01:58.634] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0916 23:01:58.634] See 'kubectl expose -h' for help and examples
I0916 23:01:58.634] has:invalid deployment: no selectors
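The failure above is kubectl expose refusing a deployment whose spec carries no selector it can copy into the Service. When that happens the selector can be supplied explicitly; a hedged sketch (the deployment name and label below are placeholders, not taken from the log):

    # Supply the Service selector by hand when the exposed object has none.
    kubectl expose deployment example --port=80 --selector=app=example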
W0916 23:01:58.735] I0916 23:01:58.092376   52967 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568674914-5705", Name:"nginx-deployment", UID:"a7936de4-45a7-4615-a366-e54996c9c562", APIVersion:"apps/v1", ResourceVersion:"1658", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W0916 23:01:58.736] I0916 23:01:58.095733   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674914-5705", Name:"nginx-deployment-6986c7bc94", UID:"5d51eff8-5694-46a9-94ea-7e64a9f527a8", APIVersion:"apps/v1", ResourceVersion:"1659", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-4wvdl
W0916 23:01:58.736] I0916 23:01:58.099240   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674914-5705", Name:"nginx-deployment-6986c7bc94", UID:"5d51eff8-5694-46a9-94ea-7e64a9f527a8", APIVersion:"apps/v1", ResourceVersion:"1659", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-mp7k8
W0916 23:01:58.737] I0916 23:01:58.099465   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674914-5705", Name:"nginx-deployment-6986c7bc94", UID:"5d51eff8-5694-46a9-94ea-7e64a9f527a8", APIVersion:"apps/v1", ResourceVersion:"1659", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-v5nx8
W0916 23:01:58.737] E0916 23:01:58.105882   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:58.737] I0916 23:01:58.190990   52967 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568674914-5705", Name:"nginx-deployment", UID:"a7936de4-45a7-4615-a366-e54996c9c562", APIVersion:"apps/v1", ResourceVersion:"1672", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6986c7bc94 to 1
W0916 23:01:58.738] I0916 23:01:58.198246   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674914-5705", Name:"nginx-deployment-6986c7bc94", UID:"5d51eff8-5694-46a9-94ea-7e64a9f527a8", APIVersion:"apps/v1", ResourceVersion:"1673", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-4wvdl
W0916 23:01:58.738] I0916 23:01:58.198402   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674914-5705", Name:"nginx-deployment-6986c7bc94", UID:"5d51eff8-5694-46a9-94ea-7e64a9f527a8", APIVersion:"apps/v1", ResourceVersion:"1673", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-v5nx8
W0916 23:01:58.738] E0916 23:01:58.225104   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:58.739] E0916 23:01:58.336382   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:58.739] E0916 23:01:58.456476   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:58.799] I0916 23:01:58.798896   52967 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568674914-5705", Name:"nginx-deployment", UID:"54059a3c-4d78-467d-bcb8-6601157a6923", APIVersion:"apps/v1", ResourceVersion:"1696", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W0916 23:01:58.802] I0916 23:01:58.801970   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674914-5705", Name:"nginx-deployment-6986c7bc94", UID:"37ac32f9-34c9-4900-ab58-46e74530f071", APIVersion:"apps/v1", ResourceVersion:"1697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-rrfn5
W0916 23:01:58.805] I0916 23:01:58.805007   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674914-5705", Name:"nginx-deployment-6986c7bc94", UID:"37ac32f9-34c9-4900-ab58-46e74530f071", APIVersion:"apps/v1", ResourceVersion:"1697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-slw48
W0916 23:01:58.806] I0916 23:01:58.805288   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568674914-5705", Name:"nginx-deployment-6986c7bc94", UID:"37ac32f9-34c9-4900-ab58-46e74530f071", APIVersion:"apps/v1", ResourceVersion:"1697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-mcnrp
I0916 23:01:58.906] deployment.apps/nginx-deployment created
I0916 23:01:58.907] core.sh:1146: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I0916 23:01:58.995] service/nginx-deployment exposed
I0916 23:01:59.090] core.sh:1150: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
I0916 23:01:59.167] deployment.apps "nginx-deployment" deleted
I0916 23:01:59.178] service "nginx-deployment" deleted
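core.sh:1146-1150 above creates a deployment, exposes it as a service on port 80, verifies the port, and tears both down. The equivalent kubectl calls, sketched with the names from the log; the real test drives these through core.sh, so treat this as an illustration rather than the script itself:

    # Expose an existing deployment as a ClusterIP service on port 80.
    kubectl expose deployment nginx-deployment --port=80
    kubectl get service nginx-deployment -o go-template='{{(index .spec.ports 0).port}}'
    # Clean up both objects, as the test does.
    kubectl delete deployment nginx-deployment
    kubectl delete service nginx-deployment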
W0916 23:01:59.279] E0916 23:01:59.108058   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:59.279] E0916 23:01:59.226521   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:59.338] E0916 23:01:59.338170   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:01:59.357] I0916 23:01:59.356079   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"frontend", UID:"aa97631e-87fa-4635-9c93-763c32f4b5d4", APIVersion:"v1", ResourceVersion:"1724", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-m5pn9
W0916 23:01:59.361] I0916 23:01:59.360365   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"frontend", UID:"aa97631e-87fa-4635-9c93-763c32f4b5d4", APIVersion:"v1", ResourceVersion:"1724", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-r6g8x
W0916 23:01:59.362] I0916 23:01:59.361791   52967 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568674914-5705", Name:"frontend", UID:"aa97631e-87fa-4635-9c93-763c32f4b5d4", APIVersion:"v1", ResourceVersion:"1724", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-m5hxl
W0916 23:01:59.459] E0916 23:01:59.458511   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:01:59.560] replicationcontroller/frontend created
I0916 23:01:59.560] core.sh:1157: Successful get rc frontend {{.spec.replicas}}: 3
I0916 23:01:59.560] service/frontend exposed
I0916 23:01:59.632] core.sh:1161: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0916 23:01:59.713] service/frontend-2 exposed
I0916 23:01:59.811] core.sh:1165: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 443
... skipping 8 lines ...
I0916 23:02:00.701] service "frontend" deleted
I0916 23:02:00.708] service "frontend-2" deleted
I0916 23:02:00.714] service "frontend-3" deleted
I0916 23:02:00.723] service "frontend-4" deleted
I0916 23:02:00.730] service "frontend-5" deleted
I0916 23:02:00.826] Successful
I0916 23:02:00.827] message:error: cannot expose a Node
I0916 23:02:00.827] has:cannot expose
I0916 23:02:00.918] Successful
I0916 23:02:00.918] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0916 23:02:00.918] has:metadata.name: Invalid value
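The rejection above is the apiserver enforcing the 63-character limit on object names used as DNS labels. A quick way to trigger the same validation error (the target rc is an assumption; only the over-long service name is copied from the log):

    # Any object name longer than 63 characters fails metadata.name validation.
    kubectl expose rc frontend --port=80 \
      --name=invalid-large-service-name-that-has-more-than-sixty-three-characters || true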
I0916 23:02:01.014] Successful
I0916 23:02:01.015] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
I0916 23:02:01.015] has:kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
I0916 23:02:01.094] service "kubernetes-serve-hostname-testing-sixty-three-characters-in-len" deleted
W0916 23:02:01.195] E0916 23:02:00.109568   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:02:01.196] E0916 23:02:00.228532   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:02:01.196] E0916 23:02:00.339794   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:02:01.196] E0916 23:02:00.460108   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:02:01.197] E0916 23:02:01.111000   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 23:02:01.231] E0916 23:02:01.230425   52967 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 23:02:01.332] Successful
I0916 23:02:01.333] message:service/etcd-server exposed
I0916 23:02:01.333] has:etcd-server exposed
I0916 23:02:01.333] core.sh:1208: Successful get service etcd-server {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: port-1 2380
I0916 23:02:01.394] core.sh:1209: Successful get service etcd-server {{(index .spec.ports 1).name}} {{(index .spec.ports 1).port}}: port-2 2379
I0916 23:02:01.485] service "etcd-server" deleted
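core.sh:1208-1209 above checks a service exposed with two named ports (port-1 2380 and port-2 2379). A sketch of how those assertions read each port entry with a go-template index; the service name is from the log, while the object it was exposed from sits outside this excerpt:

    # Read the name and port of the first and second port entries on the service.
    kubectl get service etcd-server -o go-template='{{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}'
    kubectl get service etcd-server -o go-template='{{(index .spec.ports 1).name}} {{(index .spec.ports 1).port}}'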
I0916 23:02:01.585] core.sh:1215: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I0916 23:02:01.667] replicationcontroller "frontend" deleted
I0916 23:02:01.768] core.sh:1219: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 23:02:01.859] core.sh:1223: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0916