PR: draveness: feat(scheduler): expose SharedInformerFactory to the framework handle
Result: FAILURE
Tests: 1 failed / 2898 succeeded
Started: 2019-10-10 13:58
Elapsed: 30m34s
Revision:
Builder: gke-prow-ssd-pool-1a225945-8wl3
Refs: master:4fb75e2f, 83663:d7db0e24
pod: f868d825-eb65-11e9-ba0b-92ceaad5545b
infra-commit: 723ca7ced
repo: k8s.io/kubernetes
repo-commit: c11ab40cfe58458168be91e94e3658a93a4ef813
repos: {u'k8s.io/kubernetes': u'master:4fb75e2f0d9a36c47edcf65f89bb92f20274ee56,83663:d7db0e245dd0e2b3a3316a26cf988aa7263f4210'}
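The PR under test exposes a SharedInformerFactory accessor on the scheduler framework handle. As a rough illustration of what that enables (a sketch under assumptions, not the PR's code: the plugin name, struct fields, and choice of node lister are invented), a plugin built against the v1alpha1 framework of that era could pull shared listers from the handle at construction time:

    // Sketch: a scheduler framework plugin reading from the handle's
    // SharedInformerFactory. "PIDPressureAware" and the node lister
    // are illustrative, not part of the PR.
    package example

    import (
        "k8s.io/apimachinery/pkg/runtime"
        corelisters "k8s.io/client-go/listers/core/v1"
        framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
    )

    type pidPressureAware struct {
        nodeLister corelisters.NodeLister
    }

    // New is the plugin factory; the framework passes its handle in.
    func New(_ *runtime.Unknown, h framework.FrameworkHandle) (framework.Plugin, error) {
        // SharedInformerFactory() is the accessor the PR title refers to.
        factory := h.SharedInformerFactory()
        return &pidPressureAware{
            nodeLister: factory.Core().V1().Nodes().Lister(),
        }, nil
    }

    func (p *pidPressureAware) Name() string { return "PIDPressureAware" }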

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestNodePIDPressure 34s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$
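TestNodePIDPressure exercises scheduling against a node that reports PID pressure; the raw apiserver and scheduler output follows below. For background, a minimal sketch (assuming pre-1.18 client-go signatures; "test-node" and the kubeconfig path are placeholders) of how such a fixture can flip a node's PIDPressure condition:

    // Sketch: report PIDPressure on a node via client-go. A scheduler
    // with the PID-pressure check enabled should then avoid placing
    // ordinary pods there. Error handling is reduced to panics.
    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        node, err := client.CoreV1().Nodes().Get("test-node", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Append a PIDPressure=True condition to the node's status.
        node.Status.Conditions = append(node.Status.Conditions, v1.NodeCondition{
            Type:   v1.NodePIDPressure,
            Status: v1.ConditionTrue,
        })
        if _, err := client.CoreV1().Nodes().UpdateStatus(node); err != nil {
            panic(err)
        }
    }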
=== RUN   TestNodePIDPressure
W1010 14:24:31.171823  108280 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I1010 14:24:31.172674  108280 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I1010 14:24:31.172731  108280 master.go:305] Node port range unspecified. Defaulting to 30000-32767.
I1010 14:24:31.172746  108280 master.go:261] Using reconciler: 
I1010 14:24:31.175005  108280 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.176097  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.176298  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.184076  108280 reflector.go:185] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I1010 14:24:31.185076  108280 watch_cache.go:451] Replace watchCache (rev: 30658) 
I1010 14:24:31.183976  108280 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I1010 14:24:31.187503  108280 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.188599  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.188750  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.190762  108280 store.go:1342] Monitoring events count at <storage-prefix>//events
I1010 14:24:31.190942  108280 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1010 14:24:31.192441  108280 watch_cache.go:451] Replace watchCache (rev: 30658) 
I1010 14:24:31.193529  108280 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.194126  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.194263  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.197788  108280 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I1010 14:24:31.198614  108280 reflector.go:185] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I1010 14:24:31.203220  108280 watch_cache.go:451] Replace watchCache (rev: 30658) 
I1010 14:24:31.204713  108280 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.205258  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.205438  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.206651  108280 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I1010 14:24:31.206688  108280 reflector.go:185] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I1010 14:24:31.206988  108280 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.207218  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.207242  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.208317  108280 watch_cache.go:451] Replace watchCache (rev: 30658) 
I1010 14:24:31.208357  108280 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I1010 14:24:31.208545  108280 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.208702  108280 reflector.go:185] Listing and watching *core.Secret from storage/cacher.go:/secrets
I1010 14:24:31.208768  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.208787  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.210275  108280 watch_cache.go:451] Replace watchCache (rev: 30658) 
I1010 14:24:31.210683  108280 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I1010 14:24:31.210992  108280 reflector.go:185] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I1010 14:24:31.211800  108280 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.212078  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.212101  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.214034  108280 watch_cache.go:451] Replace watchCache (rev: 30658) 
I1010 14:24:31.214142  108280 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I1010 14:24:31.214270  108280 reflector.go:185] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I1010 14:24:31.214335  108280 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.214555  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.214577  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.215640  108280 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I1010 14:24:31.215820  108280 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.216056  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.216079  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.216159  108280 reflector.go:185] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I1010 14:24:31.216200  108280 watch_cache.go:451] Replace watchCache (rev: 30658) 
I1010 14:24:31.217516  108280 watch_cache.go:451] Replace watchCache (rev: 30658) 
I1010 14:24:31.217659  108280 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I1010 14:24:31.217740  108280 reflector.go:185] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I1010 14:24:31.217870  108280 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.218099  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.218123  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.219786  108280 watch_cache.go:451] Replace watchCache (rev: 30658) 
I1010 14:24:31.220152  108280 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I1010 14:24:31.220321  108280 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.220559  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.220581  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.220685  108280 reflector.go:185] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I1010 14:24:31.223264  108280 watch_cache.go:451] Replace watchCache (rev: 30658) 
I1010 14:24:31.224009  108280 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I1010 14:24:31.224055  108280 reflector.go:185] Listing and watching *core.Node from storage/cacher.go:/minions
I1010 14:24:31.224199  108280 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.224401  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.224427  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.224963  108280 watch_cache.go:451] Replace watchCache (rev: 30659) 
I1010 14:24:31.225337  108280 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I1010 14:24:31.225561  108280 reflector.go:185] Listing and watching *core.Pod from storage/cacher.go:/pods
I1010 14:24:31.225550  108280 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.225805  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.225824  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.226723  108280 watch_cache.go:451] Replace watchCache (rev: 30659) 
I1010 14:24:31.228102  108280 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I1010 14:24:31.228524  108280 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.229054  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.229314  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.228273  108280 reflector.go:185] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I1010 14:24:31.230691  108280 watch_cache.go:451] Replace watchCache (rev: 30659) 
I1010 14:24:31.230953  108280 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I1010 14:24:31.231257  108280 reflector.go:185] Listing and watching *core.Service from storage/cacher.go:/services/specs
I1010 14:24:31.231300  108280 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.231555  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.231577  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.232387  108280 watch_cache.go:451] Replace watchCache (rev: 30659) 
I1010 14:24:31.232837  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.232883  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.234503  108280 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.234752  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.234773  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.236076  108280 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I1010 14:24:31.236100  108280 rest.go:115] the default service ipfamily for this cluster is: IPv4
I1010 14:24:31.236674  108280 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.236947  108280 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.237749  108280 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.238457  108280 reflector.go:185] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I1010 14:24:31.239166  108280 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.240266  108280 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.241379  108280 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.242502  108280 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.244715  108280 watch_cache.go:451] Replace watchCache (rev: 30659) 
I1010 14:24:31.246266  108280 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.247105  108280 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.248330  108280 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.250338  108280 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.250751  108280 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.252042  108280 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.252750  108280 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.253772  108280 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.254575  108280 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.255561  108280 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.256092  108280 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.256493  108280 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.256889  108280 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.257474  108280 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.257893  108280 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.258368  108280 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.260066  108280 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.260763  108280 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.262340  108280 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.264096  108280 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.264631  108280 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.265164  108280 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.266088  108280 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.266595  108280 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.267643  108280 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.268815  108280 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.270336  108280 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.271419  108280 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.271956  108280 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.272197  108280 master.go:453] Skipping disabled API group "auditregistration.k8s.io".
I1010 14:24:31.272300  108280 master.go:464] Enabling API group "authentication.k8s.io".
I1010 14:24:31.272420  108280 master.go:464] Enabling API group "authorization.k8s.io".
I1010 14:24:31.273995  108280 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.274392  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.274452  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.276416  108280 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1010 14:24:31.276502  108280 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1010 14:24:31.277789  108280 watch_cache.go:451] Replace watchCache (rev: 30660) 
I1010 14:24:31.281493  108280 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.282108  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.282371  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.283659  108280 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1010 14:24:31.283885  108280 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.284094  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.284115  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.284229  108280 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1010 14:24:31.285350  108280 watch_cache.go:451] Replace watchCache (rev: 30660) 
I1010 14:24:31.285683  108280 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1010 14:24:31.285701  108280 master.go:464] Enabling API group "autoscaling".
I1010 14:24:31.285915  108280 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.286159  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.286179  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.286258  108280 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1010 14:24:31.288337  108280 watch_cache.go:451] Replace watchCache (rev: 30660) 
I1010 14:24:31.288381  108280 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I1010 14:24:31.289004  108280 reflector.go:185] Listing and watching *batch.Job from storage/cacher.go:/jobs
I1010 14:24:31.290021  108280 watch_cache.go:451] Replace watchCache (rev: 30660) 
I1010 14:24:31.290499  108280 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.290751  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.290776  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.292940  108280 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I1010 14:24:31.293258  108280 master.go:464] Enabling API group "batch".
I1010 14:24:31.293158  108280 reflector.go:185] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I1010 14:24:31.295159  108280 watch_cache.go:451] Replace watchCache (rev: 30660) 
I1010 14:24:31.297182  108280 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.297751  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.298009  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.302976  108280 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I1010 14:24:31.304342  108280 reflector.go:185] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I1010 14:24:31.305830  108280 watch_cache.go:451] Replace watchCache (rev: 30661) 
I1010 14:24:31.326299  108280 master.go:464] Enabling API group "certificates.k8s.io".
I1010 14:24:31.326750  108280 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.327171  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.327312  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.329549  108280 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1010 14:24:31.329756  108280 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1010 14:24:31.331108  108280 watch_cache.go:451] Replace watchCache (rev: 30661) 
I1010 14:24:31.333274  108280 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.333646  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.333764  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.335366  108280 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1010 14:24:31.336717  108280 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1010 14:24:31.337741  108280 watch_cache.go:451] Replace watchCache (rev: 30661) 
I1010 14:24:31.339906  108280 master.go:464] Enabling API group "coordination.k8s.io".
I1010 14:24:31.342947  108280 master.go:453] Skipping disabled API group "discovery.k8s.io".
I1010 14:24:31.343438  108280 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.343905  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.344023  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.346762  108280 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1010 14:24:31.347036  108280 master.go:464] Enabling API group "extensions".
I1010 14:24:31.346946  108280 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1010 14:24:31.349565  108280 watch_cache.go:451] Replace watchCache (rev: 30662) 
I1010 14:24:31.350970  108280 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.351379  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.351503  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.353419  108280 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I1010 14:24:31.353604  108280 reflector.go:185] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I1010 14:24:31.366013  108280 watch_cache.go:451] Replace watchCache (rev: 30662) 
I1010 14:24:31.368235  108280 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.368609  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.368721  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.370884  108280 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1010 14:24:31.371722  108280 watch_cache.go:451] Replace watchCache (rev: 30662) 
I1010 14:24:31.372829  108280 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1010 14:24:31.373827  108280 master.go:464] Enabling API group "networking.k8s.io".
I1010 14:24:31.374109  108280 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.374515  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.374682  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.375986  108280 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I1010 14:24:31.376189  108280 master.go:464] Enabling API group "node.k8s.io".
I1010 14:24:31.376095  108280 reflector.go:185] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I1010 14:24:31.378141  108280 watch_cache.go:451] Replace watchCache (rev: 30662) 
I1010 14:24:31.379219  108280 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.384490  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.384709  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.389277  108280 reflector.go:185] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I1010 14:24:31.390615  108280 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I1010 14:24:31.390765  108280 watch_cache.go:451] Replace watchCache (rev: 30663) 
I1010 14:24:31.390991  108280 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.392341  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.392468  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.393556  108280 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I1010 14:24:31.393587  108280 master.go:464] Enabling API group "policy".
I1010 14:24:31.393664  108280 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.393761  108280 reflector.go:185] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I1010 14:24:31.393998  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.394030  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.396600  108280 watch_cache.go:451] Replace watchCache (rev: 30663) 
I1010 14:24:31.397970  108280 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1010 14:24:31.398215  108280 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.398353  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.398375  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.398473  108280 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1010 14:24:31.399925  108280 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1010 14:24:31.400013  108280 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.400218  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.400258  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.400356  108280 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1010 14:24:31.400725  108280 watch_cache.go:451] Replace watchCache (rev: 30663) 
I1010 14:24:31.402181  108280 watch_cache.go:451] Replace watchCache (rev: 30663) 
I1010 14:24:31.402191  108280 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1010 14:24:31.402391  108280 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.402449  108280 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1010 14:24:31.402558  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.402577  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.403315  108280 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1010 14:24:31.403390  108280 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.403527  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.403557  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.403638  108280 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1010 14:24:31.404153  108280 watch_cache.go:451] Replace watchCache (rev: 30663) 
I1010 14:24:31.405931  108280 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1010 14:24:31.406123  108280 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.406183  108280 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1010 14:24:31.406229  108280 watch_cache.go:451] Replace watchCache (rev: 30663) 
I1010 14:24:31.406271  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.406288  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.407664  108280 watch_cache.go:451] Replace watchCache (rev: 30663) 
I1010 14:24:31.408464  108280 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1010 14:24:31.408512  108280 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.408572  108280 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1010 14:24:31.408649  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.408670  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.409230  108280 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1010 14:24:31.409422  108280 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.409889  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.409913  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.409999  108280 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1010 14:24:31.410143  108280 watch_cache.go:451] Replace watchCache (rev: 30663) 
I1010 14:24:31.412936  108280 watch_cache.go:451] Replace watchCache (rev: 30663) 
I1010 14:24:31.413006  108280 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1010 14:24:31.413058  108280 master.go:464] Enabling API group "rbac.authorization.k8s.io".
I1010 14:24:31.413198  108280 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1010 14:24:31.414205  108280 watch_cache.go:451] Replace watchCache (rev: 30663) 
I1010 14:24:31.415547  108280 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.415707  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.415727  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.416950  108280 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1010 14:24:31.417344  108280 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.417109  108280 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1010 14:24:31.419158  108280 watch_cache.go:451] Replace watchCache (rev: 30663) 
I1010 14:24:31.420131  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.420160  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.428044  108280 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1010 14:24:31.428088  108280 master.go:464] Enabling API group "scheduling.k8s.io".
I1010 14:24:31.428313  108280 master.go:453] Skipping disabled API group "settings.k8s.io".
I1010 14:24:31.428376  108280 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1010 14:24:31.428547  108280 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.428978  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.429065  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.429219  108280 watch_cache.go:451] Replace watchCache (rev: 30664) 
I1010 14:24:31.430480  108280 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1010 14:24:31.430514  108280 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1010 14:24:31.430698  108280 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.430925  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.430965  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.431453  108280 watch_cache.go:451] Replace watchCache (rev: 30664) 
I1010 14:24:31.432355  108280 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1010 14:24:31.432417  108280 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.432566  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.432598  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.432571  108280 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1010 14:24:31.433379  108280 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1010 14:24:31.433425  108280 reflector.go:185] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1010 14:24:31.433437  108280 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.433593  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.433617  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.434185  108280 watch_cache.go:451] Replace watchCache (rev: 30664) 
I1010 14:24:31.434809  108280 watch_cache.go:451] Replace watchCache (rev: 30664) 
I1010 14:24:31.435268  108280 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I1010 14:24:31.435587  108280 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.435689  108280 reflector.go:185] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I1010 14:24:31.435928  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.436014  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.436759  108280 watch_cache.go:451] Replace watchCache (rev: 30664) 
I1010 14:24:31.437406  108280 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1010 14:24:31.437502  108280 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1010 14:24:31.437733  108280 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.437983  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.438062  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.438437  108280 watch_cache.go:451] Replace watchCache (rev: 30664) 
I1010 14:24:31.450083  108280 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1010 14:24:31.451072  108280 watch_cache.go:451] Replace watchCache (rev: 30664) 
I1010 14:24:31.452149  108280 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1010 14:24:31.452418  108280 master.go:464] Enabling API group "storage.k8s.io".
I1010 14:24:31.453513  108280 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.455207  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.455248  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.456134  108280 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I1010 14:24:31.456320  108280 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.456420  108280 reflector.go:185] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I1010 14:24:31.456474  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.456490  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.457470  108280 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I1010 14:24:31.457660  108280 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.457789  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.457808  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.457910  108280 reflector.go:185] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I1010 14:24:31.460190  108280 watch_cache.go:451] Replace watchCache (rev: 30664) 
I1010 14:24:31.460323  108280 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I1010 14:24:31.460521  108280 reflector.go:185] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I1010 14:24:31.460511  108280 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.460625  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.460642  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.462381  108280 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I1010 14:24:31.462479  108280 watch_cache.go:451] Replace watchCache (rev: 30664) 
I1010 14:24:31.462548  108280 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.462678  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.462681  108280 reflector.go:185] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I1010 14:24:31.462696  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.464930  108280 watch_cache.go:451] Replace watchCache (rev: 30664) 
I1010 14:24:31.466732  108280 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I1010 14:24:31.466769  108280 master.go:464] Enabling API group "apps".
I1010 14:24:31.466913  108280 reflector.go:185] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I1010 14:24:31.467134  108280 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.467314  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.467338  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.468284  108280 watch_cache.go:451] Replace watchCache (rev: 30665) 
I1010 14:24:31.468524  108280 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1010 14:24:31.468576  108280 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.468659  108280 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1010 14:24:31.468730  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.468760  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.468993  108280 watch_cache.go:451] Replace watchCache (rev: 30665) 
I1010 14:24:31.469631  108280 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1010 14:24:31.469688  108280 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.469817  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.469833  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.469933  108280 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1010 14:24:31.471249  108280 watch_cache.go:451] Replace watchCache (rev: 30665) 
I1010 14:24:31.471350  108280 watch_cache.go:451] Replace watchCache (rev: 30665) 
I1010 14:24:31.471504  108280 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1010 14:24:31.471553  108280 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.471597  108280 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1010 14:24:31.471660  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.471677  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.472482  108280 watch_cache.go:451] Replace watchCache (rev: 30665) 
I1010 14:24:31.472559  108280 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1010 14:24:31.472581  108280 master.go:464] Enabling API group "admissionregistration.k8s.io".
I1010 14:24:31.472637  108280 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.472674  108280 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1010 14:24:31.472980  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:31.473009  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1010 14:24:31.474308  108280 store.go:1342] Monitoring events count at <storage-prefix>//events
I1010 14:24:31.474344  108280 master.go:464] Enabling API group "events.k8s.io".
I1010 14:24:31.474355  108280 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1010 14:24:31.474639  108280 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.474888  108280 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.475206  108280 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.475338  108280 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.475449  108280 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.475554  108280 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.475768  108280 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.475816  108280 watch_cache.go:451] Replace watchCache (rev: 30665) 
I1010 14:24:31.476371  108280 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.476504  108280 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.476624  108280 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.476927  108280 watch_cache.go:451] Replace watchCache (rev: 30665) 
I1010 14:24:31.478616  108280 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.479065  108280 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.482332  108280 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.482828  108280 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.483983  108280 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.484438  108280 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.485658  108280 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.486206  108280 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.487187  108280 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.487791  108280 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1010 14:24:31.487984  108280 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I1010 14:24:31.488969  108280 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.489384  108280 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.489904  108280 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.491132  108280 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.492276  108280 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.493332  108280 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.493777  108280 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.497533  108280 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.498915  108280 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.499410  108280 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.505155  108280 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1010 14:24:31.506916  108280 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I1010 14:24:31.509962  108280 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.510520  108280 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.511417  108280 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.512427  108280 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.513036  108280 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.513927  108280 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.514714  108280 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.515412  108280 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.515973  108280 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.516711  108280 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.517448  108280 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1010 14:24:31.517530  108280 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I1010 14:24:31.518232  108280 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.518964  108280 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1010 14:24:31.519169  108280 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I1010 14:24:31.520343  108280 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.521580  108280 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.522442  108280 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.523281  108280 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.524216  108280 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.525130  108280 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.526053  108280 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1010 14:24:31.526317  108280 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I1010 14:24:31.527800  108280 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.529175  108280 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.529722  108280 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.530837  108280 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.531347  108280 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.532021  108280 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.533123  108280 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.533806  108280 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.534379  108280 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.535516  108280 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.536236  108280 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.536803  108280 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1010 14:24:31.537068  108280 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1010 14:24:31.537204  108280 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1010 14:24:31.538499  108280 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.542895  108280 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.544761  108280 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.551216  108280 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1010 14:24:31.552458  108280 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20e620b6-3220-4afc-a00c-219151b337b3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
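
All of the storage_factory.go lines above wire one REST resource apiece to the same etcd backend (ServerList http://127.0.0.1:2379) under this test's unique prefix, and the Config struct appears to be dumped with Go's verbose formatting, so its two time.Duration fields show up as raw nanosecond counts. The "Skipping API ... because it has no resources" warnings are expected here: no storage was registered for those group/versions, so the generic apiserver leaves them out of discovery. A quick sketch decoding the duration fields from the dump:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values copied from the storagebackend.Config dumps above;
        // time.Duration counts nanoseconds, so these decode cleanly.
        compactionInterval := time.Duration(300000000000)   // CompactionInterval -> 5m0s
        countMetricPollPeriod := time.Duration(60000000000) // CountMetricPollPeriod -> 1m0s
        fmt.Println(compactionInterval, countMetricPollPeriod) // prints "5m0s 1m0s"
    }
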
I1010 14:24:31.557229  108280 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 14:24:31.557257  108280 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I1010 14:24:31.557268  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:31.557279  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:31.557288  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:31.557296  108280 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:31.557339  108280 httplog.go:90] GET /healthz: (224.129µs) 0 [Go-http-client/1.1 127.0.0.1:33648]
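
This is the verbose /healthz response body interleaved with the request log: [+] marks a passing check, [-] a failing one, and the aggregate endpoint withholds the failure reasons (the healthz.go:177 lines above carry them instead). The etcd check fails until the storage client finishes dialing, and each poststarthook check stays red until that hook has run. A minimal probe sketch, assuming a hypothetical PORT for this test apiserver (individual checks are also served at paths like /healthz/etcd):

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        base := "http://127.0.0.1:PORT" // hypothetical: the test apiserver picks its port at runtime
        for _, path := range []string{"/healthz?verbose", "/healthz/etcd"} {
            resp, err := http.Get(base + path)
            if err != nil {
                fmt.Println(path, "error:", err)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%s -> %d\n%s\n", path, resp.StatusCode, body)
        }
    }
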
I1010 14:24:31.558606  108280 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.381ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:31.562057  108280 httplog.go:90] GET /api/v1/services: (1.37406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:31.566271  108280 httplog.go:90] GET /api/v1/services: (1.143851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:31.568405  108280 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 14:24:31.568436  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:31.568450  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:31.568458  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:31.568469  108280 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:31.568503  108280 httplog.go:90] GET /healthz: (212.046µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:31.569304  108280 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.249543ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:31.571726  108280 httplog.go:90] POST /api/v1/namespaces: (1.763706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33652]
I1010 14:24:31.572533  108280 httplog.go:90] GET /api/v1/services: (2.881974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:31.572690  108280 httplog.go:90] GET /api/v1/services: (3.262486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:31.573922  108280 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.217627ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33652]
I1010 14:24:31.575792  108280 httplog.go:90] POST /api/v1/namespaces: (1.417658ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:31.577335  108280 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.053279ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:31.579034  108280 httplog.go:90] POST /api/v1/namespaces: (1.309906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:31.658149  108280 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 14:24:31.658186  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:31.658199  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:31.658209  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:31.658219  108280 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:31.658258  108280 httplog.go:90] GET /healthz: (261.415µs) 0 [Go-http-client/1.1 127.0.0.1:33648]
I1010 14:24:31.669178  108280 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 14:24:31.669210  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:31.669222  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:31.669232  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:31.669240  108280 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:31.669268  108280 httplog.go:90] GET /healthz: (236.616µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:31.758181  108280 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 14:24:31.758214  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:31.758230  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:31.758241  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:31.758254  108280 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:31.758290  108280 httplog.go:90] GET /healthz: (284.951µs) 0 [Go-http-client/1.1 127.0.0.1:33648]
I1010 14:24:31.769322  108280 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 14:24:31.769361  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:31.769374  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:31.769386  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:31.769394  108280 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:31.769421  108280 httplog.go:90] GET /healthz: (267.926µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:31.858232  108280 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 14:24:31.858265  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:31.858277  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:31.858287  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:31.858295  108280 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:31.858337  108280 httplog.go:90] GET /healthz: (277.075µs) 0 [Go-http-client/1.1 127.0.0.1:33648]
I1010 14:24:31.869321  108280 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 14:24:31.869365  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:31.869381  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:31.869391  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:31.869401  108280 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:31.869437  108280 httplog.go:90] GET /healthz: (301.672µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:31.958121  108280 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 14:24:31.958161  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:31.958175  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:31.958185  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:31.958193  108280 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:31.958228  108280 httplog.go:90] GET /healthz: (259.038µs) 0 [Go-http-client/1.1 127.0.0.1:33648]
E1010 14:24:31.968043  108280 event_broadcaster.go:247] Unable to write event: 'Post http://127.0.0.1:37751/apis/events.k8s.io/v1beta1/namespaces/permit-plugin92594c78-ccf3-4bfc-b15b-a7a669e2c632/events: dial tcp 127.0.0.1:37751: connect: connection refused' (may retry after sleeping)
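
This connection-refused error is leftover noise, not part of the current test: it targets port 37751 and a permit-plugin namespace, i.e. the apiserver of an earlier test case in the same binary that has already shut down, and the event broadcaster keeps retrying, as the "(may retry after sleeping)" suffix says. An illustrative sketch of that retry shape (the real broadcaster's attempt count and backoff policy may differ):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // postWithRetry shows the general "write, sleep, retry" loop behind the
    // "may retry after sleeping" message; the parameters here are made up.
    func postWithRetry(post func() error, attempts int, backoff time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = post(); err == nil {
                return nil
            }
            time.Sleep(backoff)
        }
        return err
    }

    func main() {
        err := postWithRetry(func() error {
            return errors.New("connect: connection refused")
        }, 3, 10*time.Millisecond)
        fmt.Println("gave up:", err)
    }
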
I1010 14:24:31.969309  108280 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 14:24:31.969354  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:31.969369  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:31.969382  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:31.969391  108280 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:31.969425  108280 httplog.go:90] GET /healthz: (255.865µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:32.058219  108280 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 14:24:32.058263  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.058279  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:32.058289  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:32.058299  108280 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:32.058337  108280 httplog.go:90] GET /healthz: (303.101µs) 0 [Go-http-client/1.1 127.0.0.1:33648]
I1010 14:24:32.072603  108280 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 14:24:32.072640  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.072652  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:32.072662  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:32.072670  108280 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:32.072706  108280 httplog.go:90] GET /healthz: (242.815µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:32.158212  108280 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 14:24:32.158253  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.158265  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:32.158274  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:32.158283  108280 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:32.158328  108280 httplog.go:90] GET /healthz: (295.862µs) 0 [Go-http-client/1.1 127.0.0.1:33648]
I1010 14:24:32.169388  108280 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1010 14:24:32.169421  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.169434  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:32.169448  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:32.169456  108280 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:32.169487  108280 httplog.go:90] GET /healthz: (290.128µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:32.172494  108280 client.go:361] parsed scheme: "endpoint"
I1010 14:24:32.172579  108280 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
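
The client.go/endpoint.go pair shows up whenever a fresh etcd clientv3 connection is built: the client registers a gRPC resolver scheme named "endpoint" and pushes the server list to the connection, which is what "ccResolverWrapper: sending new addresses" records. Once this dial succeeds, the etcd healthz check below flips from [-] to [+]. A minimal sketch of the same connection, assuming the go.etcd.io/etcd/clientv3 package of that era:

    package main

    import (
        "context"
        "fmt"
        "time"

        "go.etcd.io/etcd/clientv3"
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"http://127.0.0.1:2379"}, // same server list as the log
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()
        _, err = cli.Get(ctx, "health-probe") // any read doubles as a reachability check
        fmt.Println("etcd reachable:", err == nil)
    }
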
I1010 14:24:32.259638  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.259679  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:32.259689  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:32.259698  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:32.259756  108280 httplog.go:90] GET /healthz: (1.69091ms) 0 [Go-http-client/1.1 127.0.0.1:33648]
I1010 14:24:32.270757  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.270800  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:32.270812  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:32.270821  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:32.270870  108280 httplog.go:90] GET /healthz: (1.162896ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:32.359302  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.359339  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:32.359350  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:32.359359  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:32.359404  108280 httplog.go:90] GET /healthz: (1.396388ms) 0 [Go-http-client/1.1 127.0.0.1:33648]
I1010 14:24:32.373229  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.373257  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:32.373267  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:32.373277  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:32.373323  108280 httplog.go:90] GET /healthz: (4.170243ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:32.459359  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.459397  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:32.459408  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:32.459416  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:32.459475  108280 httplog.go:90] GET /healthz: (1.400442ms) 0 [Go-http-client/1.1 127.0.0.1:33648]
I1010 14:24:32.470581  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.470612  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:32.470623  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:32.470632  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:32.470679  108280 httplog.go:90] GET /healthz: (1.479126ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:32.559088  108280 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.589562ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.559511  108280 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.110199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:32.561814  108280 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.973723ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:32.562165  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.562189  108280 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1010 14:24:32.562213  108280 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1010 14:24:32.562222  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1010 14:24:32.562258  108280 httplog.go:90] GET /healthz: (2.917308ms) 0 [Go-http-client/1.1 127.0.0.1:33836]
I1010 14:24:32.563022  108280 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.004156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.563439  108280 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I1010 14:24:32.564070  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.19319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I1010 14:24:32.564342  108280 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.97561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:32.564776  108280 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.001233ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.565866  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (927.934µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:32.566703  108280 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.50509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.566956  108280 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I1010 14:24:32.566989  108280 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
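
The scheduling/bootstrap-system-priority-classes poststarthook has now ensured both built-in PriorityClasses, issuing a GET, taking the 404 as "missing", and POSTing each object, as the request log above shows. The API calls go through scheduling.k8s.io/v1beta1; the sketch below uses the v1 types for brevity:

    package main

    import (
        "fmt"

        schedulingv1 "k8s.io/api/scheduling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // The two objects the poststarthook guarantees; values match the log.
        for _, pc := range []schedulingv1.PriorityClass{
            {ObjectMeta: metav1.ObjectMeta{Name: "system-node-critical"}, Value: 2000001000},
            {ObjectMeta: metav1.ObjectMeta{Name: "system-cluster-critical"}, Value: 2000000000},
        } {
            fmt.Printf("%s = %d\n", pc.Name, pc.Value)
        }
    }
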
I1010 14:24:32.567594  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.267105ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I1010 14:24:32.569374  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.078393ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.569967  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.569985  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:32.570010  108280 httplog.go:90] GET /healthz: (980.631µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:32.570613  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (940.271µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.572146  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.189031ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.575046  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.606562ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.585796  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (10.464623ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.587377  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.172578ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.588695  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (969.727µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.591004  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.84115ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.591259  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1010 14:24:32.592272  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (848.58µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.594176  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.543997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.594459  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1010 14:24:32.595476  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (868.072µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.597480  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.557533ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.597733  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1010 14:24:32.598626  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (685.011µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.600412  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.37079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.600657  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
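
From here on, the rbac/bootstrap-roles poststarthook walks the default policy with the same look-up-then-create pattern, logging "created clusterrole..." on each 201. A minimal get-or-create sketch of that pattern, using the pre-1.18 client-go signatures (no context argument) to match this vintage of the tree:

    package bootstrapdemo

    import (
        rbacv1 "k8s.io/api/rbac/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // ensureClusterRole mirrors the GET -> 404 -> POST -> 201 sequence in the
    // log: look the role up and create it only when it is missing.
    func ensureClusterRole(cs kubernetes.Interface, role *rbacv1.ClusterRole) error {
        _, err := cs.RbacV1().ClusterRoles().Get(role.Name, metav1.GetOptions{})
        if err == nil {
            return nil // already present, nothing to do
        }
        if !apierrors.IsNotFound(err) {
            return err // some other failure; surface it
        }
        _, err = cs.RbacV1().ClusterRoles().Create(role)
        return err
    }
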
I1010 14:24:32.601676  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (877.156µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.604397  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.39064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.604591  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I1010 14:24:32.605804  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.09018ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.617926  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (11.78849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.618135  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I1010 14:24:32.619713  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.427942ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.623583  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.782772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.623866  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I1010 14:24:32.625146  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.005251ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.627424  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.737306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.627609  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1010 14:24:32.628829  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.032574ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.632175  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.835089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.632474  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1010 14:24:32.633545  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (887.184µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.635979  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.742428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.636283  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1010 14:24:32.637273  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (800.785µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.639356  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.545707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.639632  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1010 14:24:32.640715  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (819.794µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.646014  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.724523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.646478  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I1010 14:24:32.647871  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (982.637µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.649658  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.462698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.650003  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1010 14:24:32.651238  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (943.293µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.653935  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.954683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.654274  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1010 14:24:32.655733  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.150032ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.658103  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.790245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.658360  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1010 14:24:32.658722  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.658746  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:32.658781  108280 httplog.go:90] GET /healthz: (907.967µs) 0 [Go-http-client/1.1 127.0.0.1:33836]
I1010 14:24:32.659278  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (759.72µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.661083  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.386486ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.661279  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1010 14:24:32.662236  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (790.078µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.664736  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.99426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.665079  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1010 14:24:32.666172  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (776.65µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.668375  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.746921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.668795  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1010 14:24:32.669773  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (668.249µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.670111  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.670142  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:32.670175  108280 httplog.go:90] GET /healthz: (758.722µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:32.671440  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.363234ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.671612  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1010 14:24:32.673238  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.325457ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.675430  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.761398ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.675700  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1010 14:24:32.676964  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.039082ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.680941  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.175849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.681181  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I1010 14:24:32.682311  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (918.169µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.684782  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.012803ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.684980  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1010 14:24:32.690214  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (5.032339ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.692533  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.910747ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.692807  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1010 14:24:32.694258  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.066808ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.698654  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.095254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.699151  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1010 14:24:32.700384  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (914.136µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.702235  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.209505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.702500  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1010 14:24:32.703641  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (795.756µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.706208  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.998397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.706548  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
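
system:kube-scheduler and system:volume-scheduler (created just above) are the roles the scheduler under test will rely on; the authoritative rule sets live in the tree's RBAC bootstrap policy (plugin/pkg/auth/authorizer/rbac/bootstrappolicy). The snippet below is only an illustrative subset showing the shape of such a role, not the full policy:

    package bootstrapdemo

    import (
        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // exampleSchedulerRole is a deliberately trimmed-down stand-in for
    // system:kube-scheduler; the real bootstrap role grants more than this.
    var exampleSchedulerRole = rbacv1.ClusterRole{
        ObjectMeta: metav1.ObjectMeta{Name: "system:kube-scheduler"},
        Rules: []rbacv1.PolicyRule{
            {APIGroups: []string{""}, Resources: []string{"pods"}, Verbs: []string{"get", "list", "watch"}},
            {APIGroups: []string{""}, Resources: []string{"pods/binding"}, Verbs: []string{"create"}},
        },
    }
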
I1010 14:24:32.707667  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (848.269µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.710062  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.857095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.710262  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I1010 14:24:32.712987  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.240366ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.719754  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.166225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.721860  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1010 14:24:32.723980  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (863.481µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.728676  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.34153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.728899  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1010 14:24:32.731144  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.962024ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.735454  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.77154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.736624  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1010 14:24:32.738149  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.317646ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.740793  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.019583ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.741158  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1010 14:24:32.742575  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.224643ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.746006  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.7256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.746225  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1010 14:24:32.747272  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (842.749µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.749680  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.976017ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.750154  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1010 14:24:32.751337  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (838.692µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.753778  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.590609ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.754190  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1010 14:24:32.755395  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (976.88µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.757178  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.457956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.757375  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1010 14:24:32.758412  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (839.935µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.760730  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.960448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.761059  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1010 14:24:32.762057  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.762095  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:32.762153  108280 httplog.go:90] GET /healthz: (3.848063ms) 0 [Go-http-client/1.1 127.0.0.1:33836]
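The multi-line block above is a single /healthz probe: every check passes except poststarthook/rbac/bootstrap-roles, which stays failed (reason withheld in the output) until the reconciler finishes, so the endpoint returns a non-200 status and callers keep polling. A minimal polling sketch follows, assuming a plain HTTP endpoint; the helper name and address are illustrative, not the integration test's actual wiring.

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// waitForHealthz polls GET /healthz until the apiserver answers 200 OK or the
// deadline passes; while bootstrap hooks are unfinished the endpoint returns
// a non-200 status with the per-check report seen in the log.
func waitForHealthz(base string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(base + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", base, timeout)
}

func main() {
	// The address is illustrative; the integration test wires up its own server.
	if err := waitForHealthz("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}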
I1010 14:24:32.762704  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.287289ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.765069  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.827571ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.765417  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1010 14:24:32.766804  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (909.072µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.769001  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.611943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.769959  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1010 14:24:32.770025  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.770432  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:32.770625  108280 httplog.go:90] GET /healthz: (1.602572ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:32.771393  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.021318ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.773470  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.589542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.773747  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1010 14:24:32.774823  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (854.075µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.777617  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.236494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.777942  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1010 14:24:32.779107  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (924.443µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.781276  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.593143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.781621  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1010 14:24:32.783138  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.070915ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.785443  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.835418ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.785768  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1010 14:24:32.786725  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (691.211µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.788553  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.349276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.788724  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1010 14:24:32.790005  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.084345ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.792317  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.773607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.792607  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1010 14:24:32.793803  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (973.95µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.796042  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.85967ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.796352  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1010 14:24:32.797429  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (889.477µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.799776  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.428166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.800154  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1010 14:24:32.801162  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (780.74µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.803344  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.688606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.803605  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1010 14:24:32.805074  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.041675ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.807428  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.683944ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.807822  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1010 14:24:32.809546  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.484532ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.812571  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.926914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.813048  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1010 14:24:32.814558  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.001214ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.817251  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.170275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.817486  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1010 14:24:32.819619  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.918611ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.822163  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.882277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.822519  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1010 14:24:32.824005  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.17726ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.826751  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.042668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.827283  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1010 14:24:32.839388  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (979.09µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.859096  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.859134  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:32.859179  108280 httplog.go:90] GET /healthz: (1.170081ms) 0 [Go-http-client/1.1 127.0.0.1:33650]
I1010 14:24:32.860161  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.714969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:32.860435  108280 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1010 14:24:32.893037  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.893079  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:32.893133  108280 httplog.go:90] GET /healthz: (24.086715ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:32.893219  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (14.874028ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.901144  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.624371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.901441  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1010 14:24:32.924055  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (5.079197ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.940820  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.286632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:32.941140  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1010 14:24:32.959320  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.959352  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:32.959392  108280 httplog.go:90] GET /healthz: (1.44976ms) 0 [Go-http-client/1.1 127.0.0.1:33650]
I1010 14:24:32.959556  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.193446ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:32.969893  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:32.969921  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:32.969949  108280 httplog.go:90] GET /healthz: (889.658µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:32.983861  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.398699ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:32.984159  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1010 14:24:32.999770  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.272123ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.020974  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.448024ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.021477  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I1010 14:24:33.040013  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.513052ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.060538  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.060571  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.060645  108280 httplog.go:90] GET /healthz: (2.636735ms) 0 [Go-http-client/1.1 127.0.0.1:33836]
I1010 14:24:33.062301  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.798668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.062501  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1010 14:24:33.070019  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.070044  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.070075  108280 httplog.go:90] GET /healthz: (986.661µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.080719  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (2.286977ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.101262  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.807167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.101633  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1010 14:24:33.120459  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.539044ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.141291  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.685886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.141561  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1010 14:24:33.159215  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.159254  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.159297  108280 httplog.go:90] GET /healthz: (1.272271ms) 0 [Go-http-client/1.1 127.0.0.1:33650]
I1010 14:24:33.160113  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.641225ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.170227  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.170498  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.170540  108280 httplog.go:90] GET /healthz: (1.439466ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.181614  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.946231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.181863  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1010 14:24:33.200318  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.771839ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.223278  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.740574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.223594  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1010 14:24:33.240523  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.84011ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.260476  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.260582  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.033491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.261405  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1010 14:24:33.261622  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.262130  108280 httplog.go:90] GET /healthz: (3.975669ms) 0 [Go-http-client/1.1 127.0.0.1:33836]
I1010 14:24:33.270856  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.271099  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.271337  108280 httplog.go:90] GET /healthz: (2.157927ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.280433  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.926723ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.301112  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.584791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.301570  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1010 14:24:33.320595  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.66761ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.341338  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.781474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.344222  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1010 14:24:33.361766  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.440773ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.362267  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.362302  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.362351  108280 httplog.go:90] GET /healthz: (1.409299ms) 0 [Go-http-client/1.1 127.0.0.1:33650]
I1010 14:24:33.370764  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.370797  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.370899  108280 httplog.go:90] GET /healthz: (1.554793ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.381070  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.64134ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.381483  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1010 14:24:33.400460  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.918902ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.420745  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.245816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.421182  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1010 14:24:33.440224  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.585005ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.460962  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.461322  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.461254  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.714015ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.461629  108280 httplog.go:90] GET /healthz: (3.536164ms) 0 [Go-http-client/1.1 127.0.0.1:33650]
I1010 14:24:33.462384  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1010 14:24:33.481457  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.655648ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.481649  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.482014  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.482271  108280 httplog.go:90] GET /healthz: (2.428664ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.500808  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.203895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.501166  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1010 14:24:33.520142  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.649433ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.540734  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.24511ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.541095  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1010 14:24:33.560939  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (2.266754ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.563591  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.563618  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.563663  108280 httplog.go:90] GET /healthz: (3.183936ms) 0 [Go-http-client/1.1 127.0.0.1:33836]
I1010 14:24:33.570762  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.570812  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.570885  108280 httplog.go:90] GET /healthz: (1.100738ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.580441  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.963478ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.580675  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1010 14:24:33.599880  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.346752ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.620661  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.208557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.620887  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1010 14:24:33.640479  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.4981ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.661012  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.519794ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.661281  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1010 14:24:33.662517  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.662541  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.662581  108280 httplog.go:90] GET /healthz: (4.43516ms) 0 [Go-http-client/1.1 127.0.0.1:33836]
I1010 14:24:33.670163  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.670189  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.670230  108280 httplog.go:90] GET /healthz: (1.117577ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.680020  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.509633ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.701565  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.659911ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.701775  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1010 14:24:33.726731  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (8.204384ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.745746  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.257887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.746187  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1010 14:24:33.759895  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.398938ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.762128  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.762165  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.762209  108280 httplog.go:90] GET /healthz: (1.5999ms) 0 [Go-http-client/1.1 127.0.0.1:33836]
I1010 14:24:33.770080  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.770108  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.770140  108280 httplog.go:90] GET /healthz: (918.347µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.780589  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.017841ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.780813  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1010 14:24:33.799941  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.450747ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.820800  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.351737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.821916  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1010 14:24:33.840463  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.897631ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.872742  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.872781  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.872824  108280 httplog.go:90] GET /healthz: (1.143533ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:33.872833  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.872880  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.872911  108280 httplog.go:90] GET /healthz: (1.689734ms) 0 [Go-http-client/1.1 127.0.0.1:33650]
I1010 14:24:33.873891  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.576099ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I1010 14:24:33.874173  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1010 14:24:33.881832  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.325046ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:33.901137  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.612091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:33.901393  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1010 14:24:33.919830  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.386833ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:33.940291  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.840875ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:33.940508  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1010 14:24:33.959236  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.959270  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.959305  108280 httplog.go:90] GET /healthz: (1.115832ms) 0 [Go-http-client/1.1 127.0.0.1:33884]
I1010 14:24:33.959601  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.173134ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.970643  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:33.970680  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:33.970718  108280 httplog.go:90] GET /healthz: (1.425775ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.981274  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.685519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:33.981506  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1010 14:24:34.000593  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (2.019518ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.021576  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.987629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.021880  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1010 14:24:34.040645  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.779109ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.059402  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:34.059435  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:34.059486  108280 httplog.go:90] GET /healthz: (1.388576ms) 0 [Go-http-client/1.1 127.0.0.1:33650]
I1010 14:24:34.060770  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.127849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.061147  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1010 14:24:34.072353  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:34.072390  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:34.072421  108280 httplog.go:90] GET /healthz: (1.162446ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.079496  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.074996ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.101165  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.318153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.101435  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1010 14:24:34.119443  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (999.036µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.143294  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.396954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.143593  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1010 14:24:34.160579  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:34.160609  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:34.160642  108280 httplog.go:90] GET /healthz: (1.336947ms) 0 [Go-http-client/1.1 127.0.0.1:33884]
I1010 14:24:34.160734  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.850168ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.170444  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:34.170499  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:34.170542  108280 httplog.go:90] GET /healthz: (1.358982ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.180833  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.299724ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.181463  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1010 14:24:34.200157  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.597845ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.221351  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.829759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.221668  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1010 14:24:34.240000  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.394488ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.261773  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:34.261818  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:34.261877  108280 httplog.go:90] GET /healthz: (1.907458ms) 0 [Go-http-client/1.1 127.0.0.1:33650]
I1010 14:24:34.262676  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.640925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.262952  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1010 14:24:34.271072  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:34.271105  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:34.271141  108280 httplog.go:90] GET /healthz: (2.022345ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.280140  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.694205ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.300709  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.180325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.300989  108280 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1010 14:24:34.320097  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.621306ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.321570  108280 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.116572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.340718  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.194016ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.340985  108280 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1010 14:24:34.359895  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:34.359931  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:34.359971  108280 httplog.go:90] GET /healthz: (1.990193ms) 0 [Go-http-client/1.1 127.0.0.1:33884]
I1010 14:24:34.360265  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.810592ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.362171  108280 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.38863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.370394  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:34.370425  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:34.370461  108280 httplog.go:90] GET /healthz: (1.318727ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.381144  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.350884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.381652  108280 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1010 14:24:34.399643  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.140134ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.401085  108280 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.151895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.420784  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.242472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.421120  108280 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1010 14:24:34.446509  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.221553ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.448560  108280 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.45396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.460119  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:34.460146  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:34.460180  108280 httplog.go:90] GET /healthz: (2.147781ms) 0 [Go-http-client/1.1 127.0.0.1:33650]
I1010 14:24:34.460829  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.378294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.461504  108280 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1010 14:24:34.469991  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:34.470015  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:34.470047  108280 httplog.go:90] GET /healthz: (1.018107ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.479679  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.191868ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.481330  108280 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.283744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.501621  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.04428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.501985  108280 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1010 14:24:34.519770  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.350653ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.521343  108280 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.150232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.540622  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.143662ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.540876  108280 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1010 14:24:34.559690  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:34.559731  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:34.559770  108280 httplog.go:90] GET /healthz: (1.818811ms) 0 [Go-http-client/1.1 127.0.0.1:33884]
I1010 14:24:34.559833  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.47701ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.565168  108280 httplog.go:90] GET /api/v1/namespaces/kube-public: (3.905286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.569833  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:34.569937  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:34.569968  108280 httplog.go:90] GET /healthz: (973.36µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.580447  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.014217ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.580810  108280 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1010 14:24:34.599772  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.332598ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.601778  108280 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.634743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.621068  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.496004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.621328  108280 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I1010 14:24:34.644172  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.252143ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.645822  108280 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.049427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.659269  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:34.659295  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:34.659323  108280 httplog.go:90] GET /healthz: (1.472649ms) 0 [Go-http-client/1.1 127.0.0.1:33650]
I1010 14:24:34.661224  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.528476ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.661449  108280 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1010 14:24:34.670129  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:34.670155  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:34.670184  108280 httplog.go:90] GET /healthz: (961.338µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.679626  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.208267ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.681475  108280 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.440341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.701281  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.597524ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.701641  108280 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1010 14:24:34.720442  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.843359ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.722543  108280 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.522972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.740878  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.156812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.741145  108280 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1010 14:24:34.761928  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:34.761969  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:34.762014  108280 httplog.go:90] GET /healthz: (3.870612ms) 0 [Go-http-client/1.1 127.0.0.1:33884]
I1010 14:24:34.762097  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (3.37845ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.764295  108280 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.634211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.770117  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:34.770156  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:34.770216  108280 httplog.go:90] GET /healthz: (1.133147ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
E1010 14:24:34.770298  108280 factory.go:701] Error getting pod permit-plugin92594c78-ccf3-4bfc-b15b-a7a669e2c632/test-pod for retry: Get http://127.0.0.1:37751/api/v1/namespaces/permit-plugin92594c78-ccf3-4bfc-b15b-a7a669e2c632/pods/test-pod: dial tcp 127.0.0.1:37751: connect: connection refused; retrying...
I1010 14:24:34.781306  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.800526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.781560  108280 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1010 14:24:34.800266  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.738594ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.802501  108280 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.815906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.821452  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.889185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.822001  108280 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1010 14:24:34.840197  108280 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.642834ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.842155  108280 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.376085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.859480  108280 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1010 14:24:34.859516  108280 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1010 14:24:34.859558  108280 httplog.go:90] GET /healthz: (1.514522ms) 0 [Go-http-client/1.1 127.0.0.1:33650]
I1010 14:24:34.860667  108280 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.262503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:34.860940  108280 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1010 14:24:34.870329  108280 httplog.go:90] GET /healthz: (1.164739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.871947  108280 httplog.go:90] GET /api/v1/namespaces/default: (1.165023ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.874146  108280 httplog.go:90] POST /api/v1/namespaces: (1.739053ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.875635  108280 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (994.363µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.886704  108280 httplog.go:90] POST /api/v1/namespaces/default/services: (10.607899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.888604  108280 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.283198ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.890359  108280 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.403955ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.960742  108280 httplog.go:90] GET /healthz: (1.392951ms) 200 [Go-http-client/1.1 127.0.0.1:33650]
W1010 14:24:34.961528  108280 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 14:24:34.961600  108280 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 14:24:34.961613  108280 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 14:24:34.961635  108280 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 14:24:34.961656  108280 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 14:24:34.961670  108280 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 14:24:34.961685  108280 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 14:24:34.961698  108280 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 14:24:34.961714  108280 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 14:24:34.961732  108280 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1010 14:24:34.961750  108280 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1010 14:24:34.961804  108280 factory.go:295] Creating scheduler from algorithm provider 'DefaultProvider'
I1010 14:24:34.961824  108280 factory.go:383] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
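
[Editor's note] CheckNodePIDPressure is registered here among the DefaultProvider fit predicates; it is the predicate this test exercises against the node-pid-pressure namespace seen later in the log.
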
I1010 14:24:34.962122  108280 shared_informer.go:197] Waiting for caches to sync for scheduler
I1010 14:24:34.962360  108280 reflector.go:150] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:201
I1010 14:24:34.962373  108280 reflector.go:185] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:201
I1010 14:24:34.963495  108280 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (828.364µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:24:34.964323  108280 get.go:251] Starting watch for /api/v1/pods, rv=30659 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=6m7s
I1010 14:24:35.062311  108280 shared_informer.go:227] caches populated
I1010 14:24:35.062352  108280 shared_informer.go:204] Caches are synced for scheduler 
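
[Editor's note] The burst of "Starting reflector ... (1s)" lines that follows comes from the scheduler's SharedInformerFactory being started with a 1-second resync period, which is also why "forcing resync" repeats once per second for the rest of the run. A minimal sketch of that start-and-sync pattern with client-go (the function name, the single Nodes informer, and the stop-channel wiring are illustrative assumptions, not taken from the test code):

    import (
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
    )

    // startInformers starts a SharedInformerFactory with a 1s resync
    // (matching the "(1s)" reflector lines above) and blocks until the
    // caches are synced.
    func startInformers(client kubernetes.Interface, stopCh <-chan struct{}) bool {
        factory := informers.NewSharedInformerFactory(client, 1*time.Second)
        nodeInformer := factory.Core().V1().Nodes().Informer() // one informer per watched type
        factory.Start(stopCh)                                  // spawns one reflector per informer
        // Corresponds to "Waiting for caches to sync" / "Caches are synced" above.
        return cache.WaitForCacheSync(stopCh, nodeInformer.HasSynced)
    }
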
I1010 14:24:35.062806  108280 reflector.go:150] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.062832  108280 reflector.go:185] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.063353  108280 reflector.go:150] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.063375  108280 reflector.go:185] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.063768  108280 reflector.go:150] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.063797  108280 reflector.go:185] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.063912  108280 reflector.go:150] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.063930  108280 reflector.go:185] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.064325  108280 reflector.go:150] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.064341  108280 reflector.go:185] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.064353  108280 reflector.go:150] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.064382  108280 reflector.go:185] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.064719  108280 reflector.go:150] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.064729  108280 reflector.go:185] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.064752  108280 reflector.go:150] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.064769  108280 reflector.go:185] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.065141  108280 reflector.go:150] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.065155  108280 reflector.go:185] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.065219  108280 reflector.go:150] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.065236  108280 reflector.go:185] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I1010 14:24:35.067230  108280 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (697.414µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:24:35.067676  108280 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (446.238µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33926]
I1010 14:24:35.067935  108280 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=30664 labels= fields= timeout=8m44s
I1010 14:24:35.068146  108280 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (364.89µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33918]
I1010 14:24:35.068616  108280 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (373.253µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33920]
I1010 14:24:35.068631  108280 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (397.568µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33930]
I1010 14:24:35.069038  108280 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (335.71µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33922]
I1010 14:24:35.069126  108280 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (382.504µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33928]
I1010 14:24:35.069444  108280 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (318.972µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33924]
I1010 14:24:35.069457  108280 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30658 labels= fields= timeout=9m57s
I1010 14:24:35.069703  108280 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=30664 labels= fields= timeout=7m41s
I1010 14:24:35.069864  108280 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=30663 labels= fields= timeout=6m6s
I1010 14:24:35.070071  108280 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30658 labels= fields= timeout=5m49s
I1010 14:24:35.070378  108280 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30664 labels= fields= timeout=5m46s
I1010 14:24:35.070425  108280 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=30664 labels= fields= timeout=7m26s
I1010 14:24:35.070599  108280 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (798.416µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33932]
I1010 14:24:35.070807  108280 get.go:251] Starting watch for /api/v1/nodes, rv=30659 labels= fields= timeout=8m55s
I1010 14:24:35.071310  108280 get.go:251] Starting watch for /api/v1/services, rv=30887 labels= fields= timeout=9m45s
I1010 14:24:35.071386  108280 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (4.689074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33934]
I1010 14:24:35.072019  108280 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=30659 labels= fields= timeout=5m50s
I1010 14:24:35.163129  108280 shared_informer.go:227] caches populated
I1010 14:24:35.163199  108280 shared_informer.go:227] caches populated
I1010 14:24:35.163207  108280 shared_informer.go:227] caches populated
I1010 14:24:35.163213  108280 shared_informer.go:227] caches populated
I1010 14:24:35.163220  108280 shared_informer.go:227] caches populated
I1010 14:24:35.163228  108280 shared_informer.go:227] caches populated
I1010 14:24:35.163234  108280 shared_informer.go:227] caches populated
I1010 14:24:35.163240  108280 shared_informer.go:227] caches populated
I1010 14:24:35.163246  108280 shared_informer.go:227] caches populated
I1010 14:24:35.163256  108280 shared_informer.go:227] caches populated
I1010 14:24:35.163268  108280 shared_informer.go:227] caches populated
I1010 14:24:35.167087  108280 httplog.go:90] POST /api/v1/nodes: (2.688403ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:35.167723  108280 node_tree.go:93] Added node "testnode" in group "" to NodeTree
I1010 14:24:35.170561  108280 httplog.go:90] PUT /api/v1/nodes/testnode/status: (2.749108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:35.176475  108280 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods: (5.07953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:35.177762  108280 scheduling_queue.go:883] About to try and schedule pod node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pidpressure-fake-name
I1010 14:24:35.177795  108280 scheduler.go:587] Attempting to schedule pod: node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pidpressure-fake-name
I1010 14:24:35.177944  108280 scheduler_binder.go:257] AssumePodVolumes for pod "node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pidpressure-fake-name", node "testnode"
I1010 14:24:35.177961  108280 scheduler_binder.go:267] AssumePodVolumes for pod "node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pidpressure-fake-name", node "testnode": all PVCs bound and nothing to do
I1010 14:24:35.178020  108280 factory.go:717] Attempting to bind pidpressure-fake-name to testnode
I1010 14:24:35.180462  108280 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name/binding: (2.16434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:35.180669  108280 scheduler.go:719] pod node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pidpressure-fake-name is bound successfully on node "testnode", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I1010 14:24:35.182889  108280 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/events: (1.856401ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
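
[Editor's note] The pod was bound successfully, and the long run of GET .../pods/pidpressure-fake-name lines at ~100ms intervals below is the test polling the pod until it observes the scheduling outcome. A minimal sketch of such a polling loop, assuming the context-free client-go Get of this era; the helper name and the NodeName check are illustrative, and the real condition in test/integration/scheduler/util.go may differ:

    import (
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodScheduled polls the pod every 100ms until it has been bound
    // to a node, mirroring the GET requests logged below.
    func waitForPodScheduled(client kubernetes.Interface, ns, name string) error {
        return wait.Poll(100*time.Millisecond, 30*time.Second, func() (bool, error) {
            pod, err := client.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return pod.Spec.NodeName != "", nil // bound once NodeName is set
        })
    }
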
I1010 14:24:35.279961  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.180227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:35.380123  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.319222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:35.480103  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.236174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:35.579805  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.009041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:35.680009  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.060181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:35.780202  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.273819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:35.879962  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.067049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:35.980316  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.53847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:36.067815  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:36.068957  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:36.069239  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:36.069249  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:36.069818  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:36.071184  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:36.079281  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.543809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:36.179716  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.933685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:36.279629  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.810349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:36.379915  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.030278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:36.479827  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.872103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:36.580245  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.279033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:36.680539  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.24706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:36.779366  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.668049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:36.879293  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.589061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:36.980561  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.794515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:37.068074  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:37.069105  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:37.069438  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:37.069568  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:37.069984  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:37.071378  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:37.079460  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.724203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:37.183556  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (5.239081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:37.279937  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.104096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:37.380043  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.043238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:37.479566  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.671826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:37.579937  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.088687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:37.679589  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.809952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:37.779642  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.864147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:37.879445  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.69322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:37.979528  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.758041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:38.068259  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:38.069245  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:38.069582  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:38.069686  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:38.070144  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:38.071543  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:38.079252  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.542562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:38.180104  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.183882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:38.280137  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.358684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:38.379476  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.766386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:38.480723  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.885084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:38.579993  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.164217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:38.679963  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.205192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:38.780067  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.323936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:38.880258  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.360805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:38.980128  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.221131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:39.068444  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:39.069401  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:39.069971  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:39.069973  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:39.070297  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:39.071716  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:39.079966  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.152849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:39.179936  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.04461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:39.281486  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (3.376327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:39.380905  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.979325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:39.482099  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (4.051857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:39.582220  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (3.90791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:39.680117  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.309942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:39.780548  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.608602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:39.879983  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.165779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:39.980926  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (3.095449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:40.068674  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:40.069655  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:40.070186  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:40.070292  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:40.070448  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:40.071932  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:40.080135  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.231168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:40.179948  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.024157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:40.280677  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.741215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:40.380369  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.356982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:40.480469  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.14054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:40.580377  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.414797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:40.673195  108280 factory.go:717] Attempting to bind signalling-pod to test-node-0
I1010 14:24:40.673785  108280 scheduler.go:557] Failed to bind pod: permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/signalling-pod
E1010 14:24:40.673809  108280 scheduler.go:559] scheduler cache ForgetPod failed: pod ff873596-c25d-4052-b3cc-7dc0271bf7ff wasn't assumed so cannot be forgotten
E1010 14:24:40.673828  108280 scheduler.go:710] error binding pod: Post http://127.0.0.1:45175/api/v1/namespaces/permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/pods/signalling-pod/binding: dial tcp 127.0.0.1:45175: connect: connection refused
E1010 14:24:40.673886  108280 factory.go:668] Error scheduling permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/signalling-pod: Post http://127.0.0.1:45175/api/v1/namespaces/permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/pods/signalling-pod/binding: dial tcp 127.0.0.1:45175: connect: connection refused; retrying
I1010 14:24:40.673926  108280 scheduler.go:735] Updating pod condition for permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/signalling-pod to (PodScheduled==False, Reason=SchedulerError)
E1010 14:24:40.674424  108280 scheduler.go:390] Error updating the condition of the pod permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/signalling-pod: Put http://127.0.0.1:45175/api/v1/namespaces/permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/pods/signalling-pod/status: dial tcp 127.0.0.1:45175: connect: connection refused
E1010 14:24:40.674561  108280 factory.go:701] Error getting pod permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/signalling-pod for retry: Get http://127.0.0.1:45175/api/v1/namespaces/permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/pods/signalling-pod: dial tcp 127.0.0.1:45175: connect: connection refused; retrying...
E1010 14:24:40.675002  108280 event_broadcaster.go:247] Unable to write event: 'Post http://127.0.0.1:45175/apis/events.k8s.io/v1beta1/namespaces/permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/events: dial tcp 127.0.0.1:45175: connect: connection refused' (may retry after sleeping)
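
[Editor's note] The signalling-pod errors above (and the earlier retry against 127.0.0.1:37751) reference permit-plugin namespaces and apiserver ports different from the one serving this test's traffic, so they appear to be retry loops leaked from earlier permit-plugin tests whose apiservers have already been torn down; they interleave with, but do not affect, the TestNodePIDPressure polling that continues below.
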
I1010 14:24:40.680185  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.186596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:40.779797  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.023057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
E1010 14:24:40.875285  108280 factory.go:701] Error getting pod permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/signalling-pod for retry: Get http://127.0.0.1:45175/api/v1/namespaces/permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/pods/signalling-pod: dial tcp 127.0.0.1:45175: connect: connection refused; retrying...
I1010 14:24:40.879805  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.087743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:40.979974  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.118863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:41.068896  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:41.069910  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:41.070318  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:41.070445  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:41.070565  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:41.072345  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:41.080304  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.340382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:41.179643  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.811817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
E1010 14:24:41.275953  108280 factory.go:701] Error getting pod permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/signalling-pod for retry: Get http://127.0.0.1:45175/api/v1/namespaces/permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/pods/signalling-pod: dial tcp 127.0.0.1:45175: connect: connection refused; retrying...
I1010 14:24:41.279830  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.01172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
W1010 14:24:41.340348  108280 cache.go:674] Pod permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/signalling-pod expired
I1010 14:24:41.379827  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.923061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:41.479535  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.772083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:41.579512  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.746174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:41.679558  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.816439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:41.780169  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.349847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:41.880454  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.513552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:41.980159  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.299434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:42.069181  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:42.070098  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:42.070462  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:42.070668  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:42.072555  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:42.075233  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
E1010 14:24:42.076624  108280 factory.go:701] Error getting pod permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/signalling-pod for retry: Get http://127.0.0.1:45175/api/v1/namespaces/permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/pods/signalling-pod: dial tcp 127.0.0.1:45175: connect: connection refused; retrying...
I1010 14:24:42.079684  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.989239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:42.179724  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.926441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:42.280229  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.998165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:42.379675  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.871313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:42.479916  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.151434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
E1010 14:24:42.499596  108280 event_broadcaster.go:247] Unable to write event: 'Post http://127.0.0.1:37751/apis/events.k8s.io/v1beta1/namespaces/permit-plugin92594c78-ccf3-4bfc-b15b-a7a669e2c632/events: dial tcp 127.0.0.1:37751: connect: connection refused' (may retry after sleeping)
I1010 14:24:42.580509  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.618041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:42.679814  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.043206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:42.779544  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.777234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:42.880072  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.187943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:42.979780  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.951158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:43.069363  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:43.070239  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:43.070596  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:43.070791  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:43.072735  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:43.075391  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:24:43.079378  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.649575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:43.179862  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.035802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:43.280111  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.0723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:43.380153  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.296769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:43.480236  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.391745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:43.579942  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.997567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
E1010 14:24:43.677287  108280 factory.go:701] Error getting pod permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/signalling-pod for retry: Get http://127.0.0.1:45175/api/v1/namespaces/permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/pods/signalling-pod: dial tcp 127.0.0.1:45175: connect: connection refused; retrying...
[... elided: GET polls every ~100ms, 14:24:43.680-14:24:43.979 ...]
I1010 14:24:44.069564  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
[... elided: five more "forcing resync" lines at 14:24:44.07x, one per shared informer; this six-line block recurs once per second through 14:25:03 ...]
[... elided: GET polls, 14:24:44.079-14:24:44.781 ...]
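The six "forcing resync" lines that recur every second are the shared informers rebroadcasting their caches: a SharedInformerFactory built with a short resync period re-delivers every cached object to its handlers on each tick, with no apiserver round-trip. A runnable sketch against a fake clientset; the one-second period mirrors the log, but nothing else here is the test's actual wiring.

```go
package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes/fake"
	"k8s.io/client-go/tools/cache"
)

func main() {
	client := fake.NewSimpleClientset()

	// A 1s default resync: every informer built from this factory replays its
	// cache to registered handlers once per second; the reflector logs
	// "forcing resync" each time it fires.
	factory := informers.NewSharedInformerFactory(client, 1*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			// On a resync oldObj == newObj: no apiserver round-trip happened,
			// the cached object is simply re-delivered for reconciliation.
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	cache.WaitForCacheSync(stop, podInformer.HasSynced)
	time.Sleep(3 * time.Second) // observe a few resync ticks
	close(stop)
}
```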
I1010 14:24:44.873317  108280 httplog.go:90] GET /api/v1/namespaces/default: (2.265211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:44.877562  108280 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.30645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:24:44.880345  108280 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.913603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34340]
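These three reads (the default namespace, the built-in kubernetes Service, and its Endpoints) are the test apiserver's bootstrap reconciler confirming its own service record; they recur about every ten seconds (see 14:24:54.87x below). The sketch issues the same reads client-side against a fake clientset, for shape only.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/fake"
)

func main() {
	client := fake.NewSimpleClientset()
	ctx := context.TODO()

	// The same three reads the bootstrap check performs, in order.
	_, nsErr := client.CoreV1().Namespaces().Get(ctx, "default", metav1.GetOptions{})
	_, svcErr := client.CoreV1().Services("default").Get(ctx, "kubernetes", metav1.GetOptions{})
	_, epErr := client.CoreV1().Endpoints("default").Get(ctx, "kubernetes", metav1.GetOptions{})

	// Against an empty fake clientset all three are NotFound; against the
	// test apiserver they return 200, as the httplog lines above show.
	fmt.Println(nsErr, svcErr, epErr)
}
```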
[... elided: GET polls every ~100ms plus per-second resync blocks, 14:24:44.880-14:24:46.779 ...]
E1010 14:24:46.878024  108280 factory.go:701] Error getting pod permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/signalling-pod for retry: Get http://127.0.0.1:45175/api/v1/namespaces/permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/pods/signalling-pod: dial tcp 127.0.0.1:45175: connect: connection refused; retrying...
[... elided: GET polls plus per-second resync blocks, 14:24:46.879-14:24:51.379 ...]
E1010 14:24:51.463474  108280 event_broadcaster.go:247] Unable to write event: 'Post http://127.0.0.1:45175/apis/events.k8s.io/v1beta1/namespaces/permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/events: dial tcp 127.0.0.1:45175: connect: connection refused' (may retry after sleeping)
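The event broadcaster buffers events and writes them to its sink asynchronously; when the sink's apiserver is gone, the POST fails and is retried after a sleep, which is exactly this "(may retry after sleeping)" line. The log's broadcaster is the structured-events one in k8s.io/client-go/tools/events; the sketch below shows the analogous, better-known record-package wiring, with all names local to the example.

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/fake"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/record"
)

func main() {
	client := fake.NewSimpleClientset()

	// The broadcaster fans events out to sinks on background goroutines; if
	// a sink write fails (e.g. the apiserver is down), the sink logic sleeps
	// and retries rather than dropping the event immediately.
	broadcaster := record.NewBroadcaster()
	defer broadcaster.Shutdown()
	broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{
		Interface: client.CoreV1().Events(""),
	})

	recorder := broadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "example"})
	pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{
		Name: "signalling-pod", Namespace: "default", UID: "example-uid",
	}}
	// Eventf is non-blocking: it enqueues, and the sink goroutine performs
	// the POST (and any retry-after-sleep) later.
	recorder.Eventf(pod, v1.EventTypeNormal, "Example", "emitted for illustration")
}
```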
[... elided: GET polls plus resync blocks, 14:24:51.479-14:24:53.180 ...]
E1010 14:24:53.279021  108280 factory.go:701] Error getting pod permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/signalling-pod for retry: Get http://127.0.0.1:45175/api/v1/namespaces/permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/pods/signalling-pod: dial tcp 127.0.0.1:45175: connect: connection refused; retrying...
[... elided: GET polls, 14:24:53.280-14:24:53.479 ...]
E1010 14:24:53.539452  108280 event_broadcaster.go:247] Unable to write event: 'Post http://127.0.0.1:37751/apis/events.k8s.io/v1beta1/namespaces/permit-plugin92594c78-ccf3-4bfc-b15b-a7a669e2c632/events: dial tcp 127.0.0.1:37751: connect: connection refused' (may retry after sleeping)
[... elided: GET polls plus resync blocks, 14:24:53.580-14:24:54.979; the default-namespace/kubernetes-Service check recurs at 14:24:54.87x, ten seconds after the previous one ...]
[... elided: GET polls plus per-second resync blocks, 14:24:55.072-14:25:00.279 ...]
E1010 14:25:00.371110  108280 factory.go:701] Error getting pod permit-plugin92594c78-ccf3-4bfc-b15b-a7a669e2c632/test-pod for retry: Get http://127.0.0.1:37751/api/v1/namespaces/permit-plugin92594c78-ccf3-4bfc-b15b-a7a669e2c632/pods/test-pod: dial tcp 127.0.0.1:37751: connect: connection refused; retrying...
[... elided: GET polls plus resync blocks, 14:25:00.380-14:25:01.680 ...]
E1010 14:25:01.767266  108280 event_broadcaster.go:247] Unable to write event: 'Post http://127.0.0.1:45175/apis/events.k8s.io/v1beta1/namespaces/permit-plugin7e0d3a8e-3642-42f1-a567-f75e176c9966/events: dial tcp 127.0.0.1:45175: connect: connection refused' (may retry after sleeping)
[... elided: GET polls plus resync blocks, 14:25:01.779-14:25:03.279 ...]
I1010 14:25:03.379945  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.102963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:03.479411  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.684016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:03.579986  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.236983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:03.680119  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.227857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:03.779903  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.122205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:03.879810  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.9599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:03.979932  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.094688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:04.074375  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:25:04.075060  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:25:04.075177  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:25:04.075221  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:25:04.077431  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:25:04.079721  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.978437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:04.079976  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
E1010 14:25:04.143323  108280 event_broadcaster.go:247] Unable to write event: 'Post http://127.0.0.1:37751/apis/events.k8s.io/v1beta1/namespaces/permit-plugin92594c78-ccf3-4bfc-b15b-a7a669e2c632/events: dial tcp 127.0.0.1:37751: connect: connection refused' (may retry after sleeping)
I1010 14:25:04.179449  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.658139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:04.279449  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.703599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:04.379741  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.817958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:04.479558  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.812041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:04.579646  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.826882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:04.679661  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.81649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:04.780256  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (2.32771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:04.873266  108280 httplog.go:90] GET /api/v1/namespaces/default: (1.927911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:04.875423  108280 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.39059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:04.877066  108280 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.250235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:04.878968  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.201666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:04.979120  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.443699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:05.074786  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:25:05.075257  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:25:05.075288  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:25:05.075401  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:25:05.079331  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:25:05.079366  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.586831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:05.080161  108280 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1010 14:25:05.179534  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.855269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:05.181238  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.118837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:05.187067  108280 httplog.go:90] DELETE /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (5.533801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:05.189531  108280 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure39a9c009-71dd-4dd1-8390-d6fc15888e0c/pods/pidpressure-fake-name: (1.039737ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:05.190052  108280 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=30659&timeoutSeconds=367&watch=true: (30.225998618s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I1010 14:25:05.190106  108280 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30658&timeout=5m49s&timeoutSeconds=349&watch=true: (30.12026613s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33918]
I1010 14:25:05.190255  108280 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30664&timeout=5m46s&timeoutSeconds=346&watch=true: (30.120058699s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33924]
I1010 14:25:05.190264  108280 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30664&timeout=7m26s&timeoutSeconds=446&watch=true: (30.120078329s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33922]
I1010 14:25:05.190381  108280 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=30664&timeout=8m44s&timeoutSeconds=524&watch=true: (30.12270973s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I1010 14:25:05.190397  108280 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30658&timeout=9m57s&timeoutSeconds=597&watch=true: (30.121183108s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33930]
I1010 14:25:05.190400  108280 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=30659&timeout=5m50s&timeoutSeconds=350&watch=true: (30.118630156s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33934]
I1010 14:25:05.190465  108280 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=30663&timeout=6m6s&timeoutSeconds=366&watch=true: (30.120846081s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33928]
I1010 14:25:05.190489  108280 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=30887&timeout=9m45s&timeoutSeconds=585&watch=true: (30.119404198s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33932]
I1010 14:25:05.190571  108280 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30664&timeout=7m41s&timeoutSeconds=461&watch=true: (30.121039487s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33926]
I1010 14:25:05.190581  108280 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30659&timeout=8m55s&timeoutSeconds=535&watch=true: (30.12000567s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33920]
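Note: at teardown the apiserver logs every long-lived watch it closes; the ~30s durations match the lifetime of this test case. The pods watch carries the scheduler's field selector status.phase!=Failed,status.phase!=Succeeded, which keeps terminal pods out of its cache. A sketch of installing such a selector on a factory via client-go's WithTweakListOptions (the real scheduler scopes the selector to its pod informer only, so a factory-wide tweak is a simplification):

package schedinformer

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newFilteredFactory issues list/watch requests shaped like the one above:
//   GET /api/v1/pods?fieldSelector=status.phase!=Failed,status.phase!=Succeeded&watch=true
func newFilteredFactory(cfg *rest.Config) informers.SharedInformerFactory {
	client := kubernetes.NewForConfigOrDie(cfg)
	return informers.NewSharedInformerFactoryWithOptions(client, 0,
		informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
			opts.FieldSelector = "status.phase!=Failed,status.phase!=Succeeded"
		}))
}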
I1010 14:25:05.194594  108280 httplog.go:90] DELETE /api/v1/nodes: (4.403594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:05.194870  108280 controller.go:185] Shutting down kubernetes service endpoint reconciler
I1010 14:25:05.196526  108280 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.383353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
I1010 14:25:05.198603  108280 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.587811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33938]
--- FAIL: TestNodePIDPressure (34.03s)
    predicates_test.go:924: Test Failed: error, timed out waiting for the condition, while waiting for scheduled

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20191010-141649.xml
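Note: the failure itself is the generic wait-package timeout: the test polls the pod roughly every 100ms (the long stream of GETs above) and gives up after a fixed window, surfacing "timed out waiting for the condition". A minimal sketch of that polling shape, with a hypothetical podScheduled condition standing in for the test's actual check:

package schedtest

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// podScheduled reports whether the pod has been bound to a node
// (illustrative; the real test condition differs in detail).
func podScheduled(c kubernetes.Interface, ns, name string) wait.ConditionFunc {
	return func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{}) // 1.16-era signature, no context arg
		if err != nil {
			return false, err
		}
		return pod.Spec.NodeName != "", nil
	}
}

// waitForScheduled polls every 100ms for 30s; on timeout wait.Poll
// returns exactly the error in the failure message above.
func waitForScheduled(c kubernetes.Interface, ns, name string) error {
	return wait.Poll(100*time.Millisecond, 30*time.Second, podScheduled(c, ns, name))
}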




Error lines from build-log.txt

... skipping 835 lines ...
W1010 14:11:33.095] I1010 14:11:33.094428   53040 shared_informer.go:197] Waiting for caches to sync for attach detach
W1010 14:11:33.095] I1010 14:11:33.095260   53040 controllermanager.go:534] Started "deployment"
W1010 14:11:33.096] I1010 14:11:33.095817   53040 deployment_controller.go:152] Starting deployment controller
W1010 14:11:33.096] I1010 14:11:33.095884   53040 shared_informer.go:197] Waiting for caches to sync for deployment
W1010 14:11:33.097] I1010 14:11:33.097189   53040 controllermanager.go:534] Started "cronjob"
W1010 14:11:33.098] I1010 14:11:33.097235   53040 cronjob_controller.go:96] Starting CronJob Manager
W1010 14:11:33.099] E1010 14:11:33.098823   53040 core.go:79] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1010 14:11:33.099] W1010 14:11:33.098980   53040 controllermanager.go:526] Skipping "service"
W1010 14:11:33.100] I1010 14:11:33.100273   53040 controllermanager.go:534] Started "pv-protection"
W1010 14:11:33.101] I1010 14:11:33.100525   53040 pv_protection_controller.go:81] Starting PV protection controller
W1010 14:11:33.101] I1010 14:11:33.100560   53040 shared_informer.go:197] Waiting for caches to sync for PV protection
W1010 14:11:33.101] I1010 14:11:33.101358   53040 controllermanager.go:534] Started "endpoint"
W1010 14:11:33.102] W1010 14:11:33.101433   53040 controllermanager.go:513] "endpointslice" is disabled
... skipping 20 lines ...
W1010 14:11:33.515] I1010 14:11:33.514992   53040 controllermanager.go:534] Started "job"
W1010 14:11:33.516] I1010 14:11:33.516142   53040 controllermanager.go:534] Started "disruption"
W1010 14:11:33.517] I1010 14:11:33.517067   53040 controllermanager.go:534] Started "ttl"
W1010 14:11:33.518] I1010 14:11:33.517777   53040 node_lifecycle_controller.go:77] Sending events to api server
W1010 14:11:33.518] I1010 14:11:33.517987   53040 ttl_controller.go:116] Starting TTL controller
W1010 14:11:33.518] I1010 14:11:33.518027   53040 shared_informer.go:197] Waiting for caches to sync for TTL
W1010 14:11:33.518] E1010 14:11:33.518296   53040 core.go:202] failed to start cloud node lifecycle controller: no cloud provider provided
W1010 14:11:33.518] W1010 14:11:33.518453   53040 controllermanager.go:526] Skipping "cloud-node-lifecycle"
W1010 14:11:33.519] I1010 14:11:33.517890   53040 job_controller.go:143] Starting job controller
W1010 14:11:33.519] I1010 14:11:33.518763   53040 shared_informer.go:197] Waiting for caches to sync for job
W1010 14:11:33.519] I1010 14:11:33.517910   53040 node_lifecycle_controller.go:497] Starting node controller
W1010 14:11:33.519] I1010 14:11:33.518814   53040 shared_informer.go:197] Waiting for caches to sync for taint
W1010 14:11:33.519] I1010 14:11:33.517918   53040 gc_controller.go:75] Starting GC controller
... skipping 62 lines ...
W1010 14:11:33.766] W1010 14:11:33.762645   53040 controllermanager.go:526] Skipping "ttl-after-finished"
W1010 14:11:33.767] I1010 14:11:33.762730   53040 stateful_set.go:145] Starting stateful set controller
W1010 14:11:33.767] I1010 14:11:33.762769   53040 shared_informer.go:197] Waiting for caches to sync for stateful set
W1010 14:11:33.767] I1010 14:11:33.763246   53040 controllermanager.go:534] Started "pvc-protection"
W1010 14:11:33.768] I1010 14:11:33.764774   53040 pvc_protection_controller.go:100] Starting PVC protection controller
W1010 14:11:33.768] I1010 14:11:33.765084   53040 shared_informer.go:197] Waiting for caches to sync for PVC protection
W1010 14:11:33.795] W1010 14:11:33.792165   53040 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W1010 14:11:33.833] I1010 14:11:33.832881   53040 shared_informer.go:204] Caches are synced for TTL 
W1010 14:11:33.858] I1010 14:11:33.857396   53040 shared_informer.go:204] Caches are synced for namespace 
W1010 14:11:33.859] I1010 14:11:33.858494   53040 shared_informer.go:204] Caches are synced for HPA 
W1010 14:11:33.860] I1010 14:11:33.860164   53040 shared_informer.go:204] Caches are synced for certificate-csrapproving 
W1010 14:11:33.861] I1010 14:11:33.860767   53040 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
W1010 14:11:33.861] I1010 14:11:33.861455   53040 shared_informer.go:204] Caches are synced for service account 
W1010 14:11:33.865] I1010 14:11:33.865496   53040 shared_informer.go:204] Caches are synced for PVC protection 
W1010 14:11:33.866] I1010 14:11:33.865949   49490 controller.go:606] quota admission added evaluator for: serviceaccounts
W1010 14:11:33.882] E1010 14:11:33.881754   53040 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W1010 14:11:33.893] E1010 14:11:33.892829   53040 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W1010 14:11:33.896] I1010 14:11:33.896056   53040 shared_informer.go:204] Caches are synced for deployment 
W1010 14:11:33.903] I1010 14:11:33.903163   53040 shared_informer.go:204] Caches are synced for endpoint 
W1010 14:11:33.904] I1010 14:11:33.903796   53040 shared_informer.go:204] Caches are synced for ReplicationController 
W1010 14:11:33.908] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
W1010 14:11:33.919] I1010 14:11:33.918981   53040 shared_informer.go:204] Caches are synced for GC 
W1010 14:11:33.920] I1010 14:11:33.919438   53040 shared_informer.go:204] Caches are synced for job 
... skipping 96 lines ...
I1010 14:11:38.756] +++ working dir: /go/src/k8s.io/kubernetes
I1010 14:11:38.760] +++ command: run_RESTMapper_evaluation_tests
I1010 14:11:38.780] +++ [1010 14:11:38] Creating namespace namespace-1570716698-16592
I1010 14:11:38.885] namespace/namespace-1570716698-16592 created
I1010 14:11:39.016] Context "test" modified.
I1010 14:11:39.026] +++ [1010 14:11:39] Testing RESTMapper
I1010 14:11:39.174] +++ [1010 14:11:39] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I1010 14:11:39.196] +++ exit code: 0
I1010 14:11:39.365] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I1010 14:11:39.366] bindings                                                                      true         Binding
I1010 14:11:39.366] componentstatuses                 cs                                          false        ComponentStatus
I1010 14:11:39.366] configmaps                        cm                                          true         ConfigMap
I1010 14:11:39.367] endpoints                         ep                                          true         Endpoints
... skipping 317 lines ...
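Note: the RESTMapper tests above exercise the lookups behind kubectl's resource arguments: a name like "cm" or "configmaps" is resolved to a concrete group/version/resource via API discovery, and an unknown name surfaces as: the server doesn't have a resource type "unknownresourcetype". A sketch of the same resolution with client-go's restmapper helpers (discovery client construction omitted):

package mapperutil

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/restmapper"
)

// resolveConfigMap maps the ConfigMap kind to its REST resource the way
// kubectl's RESTMapper does; dc is any discovery client for the cluster.
func resolveConfigMap(dc discovery.DiscoveryInterface) error {
	groupResources, err := restmapper.GetAPIGroupResources(dc)
	if err != nil {
		return err
	}
	mapper := restmapper.NewDiscoveryRESTMapper(groupResources)
	mapping, err := mapper.RESTMapping(schema.GroupKind{Kind: "ConfigMap"}, "v1")
	if err != nil {
		// Unknown kinds fail here ("no matches for kind ..."), which
		// kubectl reports as: the server doesn't have a resource type "...".
		return err
	}
	fmt.Println(mapping.Resource) // /v1, Resource=configmaps
	return nil
}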
I1010 14:11:57.508] core.sh:79: Successful get pods/valid-pod {{.metadata.name}}: valid-pod
I1010 14:11:57.638] core.sh:81: Successful get pods {.items[*].metadata.name}: valid-pod
I1010 14:11:57.758] core.sh:82: Successful get pod valid-pod {.metadata.name}: valid-pod
I1010 14:11:57.876] core.sh:83: Successful get pod/valid-pod {.metadata.name}: valid-pod
I1010 14:11:57.995] core.sh:84: Successful get pods/valid-pod {.metadata.name}: valid-pod
I1010 14:11:58.127]
I1010 14:11:58.133] core.sh:86: FAIL!
I1010 14:11:58.134] Describe pods valid-pod
I1010 14:11:58.134]   Expected Match: Name:
I1010 14:11:58.134]   Not found in:
I1010 14:11:58.134] Name:         valid-pod
I1010 14:11:58.134] Namespace:    namespace-1570716716-7661
I1010 14:11:58.135] Priority:     0
... skipping 108 lines ...
I1010 14:11:58.563] QoS Class:        Guaranteed
I1010 14:11:58.563] Node-Selectors:   <none>
I1010 14:11:58.563] Tolerations:      <none>
I1010 14:11:58.563] Events:           <none>
I1010 14:11:58.563]
I1010 14:11:58.699] 
I1010 14:11:58.700] FAIL!
I1010 14:11:58.700] Describe pods
I1010 14:11:58.700]   Expected Match: Name:
I1010 14:11:58.700]   Not found in:
I1010 14:11:58.700] Name:         valid-pod
I1010 14:11:58.700] Namespace:    namespace-1570716716-7661
I1010 14:11:58.700] Priority:     0
... skipping 158 lines ...
I1010 14:12:04.653] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1010 14:12:04.895] core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1010 14:12:05.032] core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1010 14:12:05.317] core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1010 14:12:05.451] core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1010 14:12:05.569] pod "valid-pod" force deleted
W1010 14:12:05.670] error: resource(s) were provided, but no name, label selector, or --all flag specified
W1010 14:12:05.671] error: setting 'all' parameter but found a non empty selector. 
W1010 14:12:05.671] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1010 14:12:05.772] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:12:05.841] core.sh:211: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I1010 14:12:05.939] namespace/test-kubectl-describe-pod created
I1010 14:12:06.072] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I1010 14:12:06.199] core.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I1010 14:12:07.551] poddisruptionbudget.policy/test-pdb-3 created
I1010 14:12:07.693] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I1010 14:12:07.809] poddisruptionbudget.policy/test-pdb-4 created
I1010 14:12:07.957] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I1010 14:12:08.181] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:12:08.445] pod/env-test-pod created
W1010 14:12:08.545] error: min-available and max-unavailable cannot be both specified
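Note: "min-available and max-unavailable cannot be both specified" reflects PodDisruptionBudget validation: spec.minAvailable and spec.maxUnavailable are mutually exclusive, and kubectl create pdb enforces the same rule on its flags. A sketch of a valid 50% budget like test-pdb-4 above, built with the typed policy/v1beta1 API (the selector labels are illustrative):

package pdbutil

import (
	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// newPDB builds a budget with only MaxUnavailable set; setting both
// MinAvailable and MaxUnavailable fails validation, as logged above.
func newPDB() *policyv1beta1.PodDisruptionBudget {
	maxUnavailable := intstr.FromString("50%")
	return &policyv1beta1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pdb-4"},
		Spec: policyv1beta1.PodDisruptionBudgetSpec{
			MaxUnavailable: &maxUnavailable,
			Selector:       &metav1.LabelSelector{MatchLabels: map[string]string{"app": "env-test"}},
		},
	}
}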
I1010 14:12:08.646] 
I1010 14:12:08.646] core.sh:264: FAIL!
I1010 14:12:08.647] Describe pods --namespace=test-kubectl-describe-pod env-test-pod
I1010 14:12:08.647]   Expected Match: TEST_CMD_1
I1010 14:12:08.647]   Not found in:
I1010 14:12:08.647] Name:         env-test-pod
I1010 14:12:08.647] Namespace:    test-kubectl-describe-pod
I1010 14:12:08.647] Priority:     0
... skipping 23 lines ...
I1010 14:12:08.651] Tolerations:       <none>
I1010 14:12:08.651] Events:            <none>
I1010 14:12:08.651]
I1010 14:12:08.651] 264 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
I1010 14:12:08.651]
I1010 14:12:08.733] 
I1010 14:12:08.734] FAIL!
I1010 14:12:08.734] Describe pods --namespace=test-kubectl-describe-pod
I1010 14:12:08.734]   Expected Match: TEST_CMD_1
I1010 14:12:08.734]   Not found in:
I1010 14:12:08.734] Name:         env-test-pod
I1010 14:12:08.735] Namespace:    test-kubectl-describe-pod
I1010 14:12:08.735] Priority:     0
... skipping 150 lines ...
I1010 14:12:25.693] pod/valid-pod patched
I1010 14:12:25.830] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I1010 14:12:25.940] pod/valid-pod patched
I1010 14:12:26.079] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I1010 14:12:26.303] pod/valid-pod patched
I1010 14:12:26.456] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1010 14:12:26.712] +++ [1010 14:12:26] "kubectl patch with resourceVersion 508" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
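Note: the Conflict above is optimistic concurrency at work: the patch pinned resourceVersion 508, the object had already moved on, and the apiserver rejected the write. Client code normally handles this by re-reading and retrying, e.g. with client-go's retry.RetryOnConflict (a generic sketch, not this test's code; the label value mirrors the one used later in this log):

package podutil

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// relabelPod updates a pod label, retrying whenever the write loses an
// optimistic-concurrency race (HTTP 409 Conflict, as in the log above).
func relabelPod(c kubernetes.Interface, ns, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{}) // 1.16-era signature
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["name"] = "valid-pod-super-sayan"
		_, err = c.CoreV1().Pods(ns).Update(pod) // retried on Conflict
		return err
	})
}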
I1010 14:12:27.078] pod "valid-pod" deleted
I1010 14:12:27.094] pod/valid-pod replaced
I1010 14:12:27.239] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I1010 14:12:27.492] Successful
I1010 14:12:27.493] message:error: --grace-period must have --force specified
I1010 14:12:27.493] has:\-\-grace-period must have \-\-force specified
I1010 14:12:27.744] Successful
I1010 14:12:27.745] message:error: --timeout must have --force specified
I1010 14:12:27.745] has:\-\-timeout must have \-\-force specified
W1010 14:12:27.986] W1010 14:12:27.985387   53040 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I1010 14:12:28.089] node/node-v1-test created
I1010 14:12:28.216] node/node-v1-test replaced
I1010 14:12:28.369] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I1010 14:12:28.481] node "node-v1-test" deleted
I1010 14:12:28.637] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1010 14:12:29.052] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 26 lines ...
I1010 14:12:30.765]     name: kubernetes-pause
I1010 14:12:30.766] has:localonlyvalue
I1010 14:12:30.828] core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I1010 14:12:31.074] core.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I1010 14:12:31.209] core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I1010 14:12:31.322] pod/valid-pod labeled
W1010 14:12:31.423] error: 'name' already has a value (valid-pod), and --overwrite is false
I1010 14:12:31.524] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I1010 14:12:31.596] core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1010 14:12:31.706] pod "valid-pod" force deleted
W1010 14:12:31.807] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1010 14:12:31.908] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:12:31.909] +++ [1010 14:12:31] Creating namespace namespace-1570716751-21353
... skipping 82 lines ...
I1010 14:12:43.076] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I1010 14:12:43.090] +++ working dir: /go/src/k8s.io/kubernetes
I1010 14:12:43.090] +++ command: run_kubectl_create_error_tests
I1010 14:12:43.104] +++ [1010 14:12:43] Creating namespace namespace-1570716763-14212
I1010 14:12:43.236] namespace/namespace-1570716763-14212 created
I1010 14:12:43.363] Context "test" modified.
I1010 14:12:43.381] +++ [1010 14:12:43] Testing kubectl create with error
W1010 14:12:43.482] Error: must specify one of -f and -k
W1010 14:12:43.483] 
W1010 14:12:43.483] Create a resource from a file or from stdin.
W1010 14:12:43.483] 
W1010 14:12:43.484]  JSON and YAML formats are accepted.
W1010 14:12:43.484] 
W1010 14:12:43.484] Examples:
... skipping 41 lines ...
W1010 14:12:43.492] 
W1010 14:12:43.492] Usage:
W1010 14:12:43.492]   kubectl create -f FILENAME [options]
W1010 14:12:43.493] 
W1010 14:12:43.493] Use "kubectl <command> --help" for more information about a given command.
W1010 14:12:43.493] Use "kubectl options" for a list of global command-line options (applies to all commands).
I1010 14:12:43.779] +++ [1010 14:12:43] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W1010 14:12:43.892] kubectl convert is DEPRECATED and will be removed in a future version.
W1010 14:12:43.892] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1010 14:12:44.062] +++ exit code: 0
I1010 14:12:44.113] Recording: run_kubectl_apply_tests
I1010 14:12:44.114] Running command: run_kubectl_apply_tests
I1010 14:12:44.148] 
... skipping 17 lines ...
I1010 14:12:46.618] (Bpod "test-pod" deleted
I1010 14:12:46.946] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W1010 14:12:47.374] I1010 14:12:47.373866   49490 client.go:361] parsed scheme: "endpoint"
W1010 14:12:47.375] I1010 14:12:47.374027   49490 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W1010 14:12:47.382] I1010 14:12:47.381925   49490 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I1010 14:12:47.483] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W1010 14:12:47.584] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I1010 14:12:47.685] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1010 14:12:47.689] +++ exit code: 0
I1010 14:12:47.747] Recording: run_kubectl_run_tests
I1010 14:12:47.748] Running command: run_kubectl_run_tests
I1010 14:12:47.783] 
I1010 14:12:47.787] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 7 lines ...
I1010 14:12:48.331] (Bjob.batch/pi created
W1010 14:12:48.432] kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1010 14:12:48.433] I1010 14:12:48.319231   49490 controller.go:606] quota admission added evaluator for: jobs.batch
W1010 14:12:48.433] I1010 14:12:48.341260   53040 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1570716767-20639", Name:"pi", UID:"bdf9d4fc-c5c9-4e81-8f04-fe63c5b4dfe8", APIVersion:"batch/v1", ResourceVersion:"522", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: pi-x82tz
I1010 14:12:48.534] run.sh:33: Successful get jobs {{range.items}}{{.metadata.name}}:{{end}}: pi:
I1010 14:12:48.606]
I1010 14:12:48.607] FAIL!
I1010 14:12:48.607] Describe pods
I1010 14:12:48.607]   Expected Match: Name:
I1010 14:12:48.608]   Not found in:
I1010 14:12:48.608] Name:           pi-x82tz
I1010 14:12:48.608] Namespace:      namespace-1570716767-20639
I1010 14:12:48.608] Priority:       0
... skipping 83 lines ...
I1010 14:12:51.293] Context "test" modified.
I1010 14:12:51.303] +++ [1010 14:12:51] Testing kubectl create filter
I1010 14:12:51.414] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:12:51.668] pod/selector-test-pod created
I1010 14:12:51.785] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1010 14:12:51.899] Successful
I1010 14:12:51.900] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1010 14:12:51.900] has:pods "selector-test-pod-dont-apply" not found
I1010 14:12:52.007] pod "selector-test-pod" deleted
I1010 14:12:52.038] +++ exit code: 0
I1010 14:12:52.080] Recording: run_kubectl_apply_deployments_tests
I1010 14:12:52.080] Running command: run_kubectl_apply_deployments_tests
I1010 14:12:52.106] 
... skipping 29 lines ...
W1010 14:12:54.620] I1010 14:12:54.522228   53040 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1570716772-7056", Name:"nginx", UID:"edc9fe8e-11c2-48db-bb7a-7deb096366fe", APIVersion:"apps/v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
W1010 14:12:54.620] I1010 14:12:54.525567   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570716772-7056", Name:"nginx-8484dd655", UID:"1c177384-57af-410f-b433-c64952402993", APIVersion:"apps/v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-5hmlq
W1010 14:12:54.621] I1010 14:12:54.528356   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570716772-7056", Name:"nginx-8484dd655", UID:"1c177384-57af-410f-b433-c64952402993", APIVersion:"apps/v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-kfczm
W1010 14:12:54.621] I1010 14:12:54.528427   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570716772-7056", Name:"nginx-8484dd655", UID:"1c177384-57af-410f-b433-c64952402993", APIVersion:"apps/v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-vjw4g
I1010 14:12:54.722] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I1010 14:12:58.872] Successful
I1010 14:12:58.872] message:Error from server (Conflict): error when applying patch:
I1010 14:12:58.873] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1570716772-7056\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I1010 14:12:58.873] to:
I1010 14:12:58.873] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I1010 14:12:58.874] Name: "nginx", Namespace: "namespace-1570716772-7056"
I1010 14:12:58.876] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1570716772-7056\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-10-10T14:12:54Z" "generation":'\x01' "labels":map["name":"nginx"] "name":"nginx" "namespace":"namespace-1570716772-7056" "resourceVersion":"607" "selfLink":"/apis/apps/v1/namespaces/namespace-1570716772-7056/deployments/nginx" "uid":"edc9fe8e-11c2-48db-bb7a-7deb096366fe"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-10-10T14:12:54Z" "lastUpdateTime":"2019-10-10T14:12:54Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-10-10T14:12:54Z" "lastUpdateTime":"2019-10-10T14:12:54Z" "message":"ReplicaSet \"nginx-8484dd655\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I1010 14:12:58.876] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I1010 14:12:58.876] has:Error from server (Conflict)
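Note: the kubectl apply Conflict above is the same optimistic-concurrency check: the applied manifest embedded resourceVersion "99" in its last-applied-configuration, so the computed patch was version-pinned while the live deployment had already changed. A patch with no resourceVersion in its body sidesteps that; a sketch of a strategic merge patch through the typed client (1.16-era Patch signature; the annotation content is illustrative):

package deployutil

import (
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// annotateDeployment applies a strategic merge patch that carries no
// resourceVersion, so it cannot hit the Conflict seen above.
func annotateDeployment(c kubernetes.Interface, ns string) error {
	patch := []byte(`{"metadata":{"annotations":{"test-cmd/touched":"true"}}}`)
	_, err := c.AppsV1().Deployments(ns).Patch("nginx", types.StrategicMergePatchType, patch)
	return err
}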
W1010 14:12:58.977] I1010 14:12:57.065819   53040 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1570716759-9543
I1010 14:13:04.151] deployment.apps/nginx configured
W1010 14:13:04.252] I1010 14:13:04.154818   53040 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1570716772-7056", Name:"nginx", UID:"71564e03-5fef-49e4-a843-8daf3fc61a97", APIVersion:"apps/v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
W1010 14:13:04.253] I1010 14:13:04.159089   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570716772-7056", Name:"nginx-668b6c7744", UID:"b76003e6-95a8-4d45-ad04-a5b5c4d36bf9", APIVersion:"apps/v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-94p2c
W1010 14:13:04.253] I1010 14:13:04.161713   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570716772-7056", Name:"nginx-668b6c7744", UID:"b76003e6-95a8-4d45-ad04-a5b5c4d36bf9", APIVersion:"apps/v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-8z48z
W1010 14:13:04.253] I1010 14:13:04.162153   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570716772-7056", Name:"nginx-668b6c7744", UID:"b76003e6-95a8-4d45-ad04-a5b5c4d36bf9", APIVersion:"apps/v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-95tpn
... skipping 142 lines ...
I1010 14:13:11.722] +++ [1010 14:13:11] Creating namespace namespace-1570716791-26298
I1010 14:13:11.800] namespace/namespace-1570716791-26298 created
I1010 14:13:11.877] Context "test" modified.
I1010 14:13:11.886] +++ [1010 14:13:11] Testing kubectl get
I1010 14:13:12.008] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:13:12.123] Successful
I1010 14:13:12.123] message:Error from server (NotFound): pods "abc" not found
I1010 14:13:12.123] has:pods "abc" not found
I1010 14:13:12.227] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:13:12.323] Successful
I1010 14:13:12.323] message:Error from server (NotFound): pods "abc" not found
I1010 14:13:12.324] has:pods "abc" not found
I1010 14:13:12.427] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:13:12.532] Successful
I1010 14:13:12.532] message:{
I1010 14:13:12.532]     "apiVersion": "v1",
I1010 14:13:12.532]     "items": [],
... skipping 23 lines ...
I1010 14:13:12.948] has not:No resources found
I1010 14:13:13.047] Successful
I1010 14:13:13.048] message:NAME
I1010 14:13:13.048] has not:No resources found
I1010 14:13:13.156] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:13:13.281] Successful
I1010 14:13:13.281] message:error: the server doesn't have a resource type "foobar"
I1010 14:13:13.281] has not:No resources found
I1010 14:13:13.366] Successful
I1010 14:13:13.366] message:No resources found in namespace-1570716791-26298 namespace.
I1010 14:13:13.367] has:No resources found
I1010 14:13:13.448] Successful
I1010 14:13:13.449] message:
I1010 14:13:13.449] has not:No resources found
I1010 14:13:13.543] Successful
I1010 14:13:13.544] message:No resources found in namespace-1570716791-26298 namespace.
I1010 14:13:13.544] has:No resources found
I1010 14:13:13.644] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:13:13.736] Successful
I1010 14:13:13.736] message:Error from server (NotFound): pods "abc" not found
I1010 14:13:13.736] has:pods "abc" not found
I1010 14:13:13.738] FAIL!
I1010 14:13:13.738] message:Error from server (NotFound): pods "abc" not found
I1010 14:13:13.738] has not:List
I1010 14:13:13.738] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I1010 14:13:13.861] Successful
I1010 14:13:13.862] message:I1010 14:13:13.805039   62652 loader.go:375] Config loaded from file:  /tmp/tmp.u3zMU9pmS4/.kube/config
I1010 14:13:13.863] I1010 14:13:13.806571   62652 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I1010 14:13:13.863] I1010 14:13:13.833440   62652 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
I1010 14:13:19.457] Successful
I1010 14:13:19.458] message:NAME    DATA   AGE
I1010 14:13:19.458] one     0      0s
I1010 14:13:19.458] three   0      0s
I1010 14:13:19.458] two     0      0s
I1010 14:13:19.458] STATUS    REASON          MESSAGE
I1010 14:13:19.459] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1010 14:13:19.459] has not:watch is only supported on individual resources
I1010 14:13:20.578] Successful
I1010 14:13:20.579] message:STATUS    REASON          MESSAGE
I1010 14:13:20.579] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1010 14:13:20.579] has not:watch is only supported on individual resources
I1010 14:13:20.585] +++ [1010 14:13:20] Creating namespace namespace-1570716800-30512
I1010 14:13:20.657] namespace/namespace-1570716800-30512 created
I1010 14:13:20.733] Context "test" modified.
I1010 14:13:20.831] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:13:21.003] pod/valid-pod created
... skipping 56 lines ...
I1010 14:13:21.102] }
I1010 14:13:21.195] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1010 14:13:21.441] <no value>Successful
I1010 14:13:21.442] message:valid-pod:
I1010 14:13:21.442] has:valid-pod:
I1010 14:13:21.531] Successful
I1010 14:13:21.532] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I1010 14:13:21.532] 	template was:
I1010 14:13:21.532] 		{.missing}
I1010 14:13:21.532] 	object given to jsonpath engine was:
I1010 14:13:21.533] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-10-10T14:13:21Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1570716800-30512", "resourceVersion":"709", "selfLink":"/api/v1/namespaces/namespace-1570716800-30512/pods/valid-pod", "uid":"27501118-86ac-459d-93d3-e1da3add12c6"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I1010 14:13:21.533] has:missing is not found
I1010 14:13:21.616] Successful
I1010 14:13:21.617] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I1010 14:13:21.617] 	template was:
I1010 14:13:21.617] 		{{.missing}}
I1010 14:13:21.617] 	raw data was:
I1010 14:13:21.618] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-10-10T14:13:21Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1570716800-30512","resourceVersion":"709","selfLink":"/api/v1/namespaces/namespace-1570716800-30512/pods/valid-pod","uid":"27501118-86ac-459d-93d3-e1da3add12c6"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I1010 14:13:21.618] 	object given to template engine was:
I1010 14:13:21.619] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-10-10T14:13:21Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1570716800-30512 resourceVersion:709 selfLink:/api/v1/namespaces/namespace-1570716800-30512/pods/valid-pod uid:27501118-86ac-459d-93d3-e1da3add12c6] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I1010 14:13:21.619] has:map has no entry for key "missing"
W1010 14:13:21.719] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
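Note: both template failures above are ordinary Go text/template behaviour: kubectl compiles -o go-template= output with missingkey=error, so a key absent from the decoded object map aborts execution instead of printing <no value>. A stdlib reproduction:

package main

import (
	"fmt"
	"os"
	"text/template"
)

func main() {
	obj := map[string]interface{}{"kind": "Pod"} // decoded object, minus "missing"

	// kubectl's go-template printer sets missingkey=error, so absent keys
	// fail loudly with the message seen in the log.
	t := template.Must(template.New("output").Option("missingkey=error").Parse("{{.missing}}"))
	if err := t.Execute(os.Stdout, obj); err != nil {
		fmt.Println(err)
		// template: output:1:2: executing "output" at <.missing>:
		// map has no entry for key "missing"
	}
}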
I1010 14:13:22.703] Successful
I1010 14:13:22.703] message:NAME        READY   STATUS    RESTARTS   AGE
I1010 14:13:22.704] valid-pod   0/1     Pending   0          0s
I1010 14:13:22.704] STATUS      REASON          MESSAGE
I1010 14:13:22.704] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1010 14:13:22.704] has:STATUS
I1010 14:13:22.706] Successful
I1010 14:13:22.706] message:NAME        READY   STATUS    RESTARTS   AGE
I1010 14:13:22.706] valid-pod   0/1     Pending   0          0s
I1010 14:13:22.706] STATUS      REASON          MESSAGE
I1010 14:13:22.706] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1010 14:13:22.707] has:valid-pod
I1010 14:13:23.805] Successful
I1010 14:13:23.806] message:pod/valid-pod
I1010 14:13:23.806] has not:STATUS
I1010 14:13:23.808] Successful
I1010 14:13:23.809] message:pod/valid-pod
... skipping 72 lines ...
I1010 14:13:24.904] status:
I1010 14:13:24.904]   phase: Pending
I1010 14:13:24.904]   qosClass: Guaranteed
I1010 14:13:24.904] ---
I1010 14:13:24.904] has:name: valid-pod
I1010 14:13:24.995] Successful
I1010 14:13:24.995] message:Error from server (NotFound): pods "invalid-pod" not found
I1010 14:13:24.995] has:"invalid-pod" not found
I1010 14:13:25.080] pod "valid-pod" deleted
I1010 14:13:25.184] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:13:25.351] pod/redis-master created
I1010 14:13:25.355] pod/valid-pod created
I1010 14:13:25.451] Successful
... skipping 35 lines ...
I1010 14:13:26.645] +++ command: run_kubectl_exec_pod_tests
I1010 14:13:26.659] +++ [1010 14:13:26] Creating namespace namespace-1570716806-5280
I1010 14:13:26.736] namespace/namespace-1570716806-5280 created
I1010 14:13:26.806] Context "test" modified.
I1010 14:13:26.816] +++ [1010 14:13:26] Testing kubectl exec POD COMMAND
I1010 14:13:26.902] Successful
I1010 14:13:26.902] message:Error from server (NotFound): pods "abc" not found
I1010 14:13:26.902] has:pods "abc" not found
I1010 14:13:27.077] pod/test-pod created
I1010 14:13:27.200] Successful
I1010 14:13:27.201] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1010 14:13:27.201] has not:pods "test-pod" not found
I1010 14:13:27.203] Successful
I1010 14:13:27.203] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1010 14:13:27.203] has not:pod or type/name must be specified
I1010 14:13:27.287] pod "test-pod" deleted
I1010 14:13:27.313] +++ exit code: 0
I1010 14:13:27.577] Recording: run_kubectl_exec_resource_name_tests
I1010 14:13:27.578] Running command: run_kubectl_exec_resource_name_tests
I1010 14:13:27.608] 
... skipping 2 lines ...
I1010 14:13:27.618] +++ command: run_kubectl_exec_resource_name_tests
I1010 14:13:27.635] +++ [1010 14:13:27] Creating namespace namespace-1570716807-17313
I1010 14:13:27.714] namespace/namespace-1570716807-17313 created
I1010 14:13:27.800] Context "test" modified.
I1010 14:13:27.809] +++ [1010 14:13:27] Testing kubectl exec TYPE/NAME COMMAND
I1010 14:13:27.917] Successful
I1010 14:13:27.917] message:error: the server doesn't have a resource type "foo"
I1010 14:13:27.918] has:error:
I1010 14:13:28.009] Successful
I1010 14:13:28.010] message:Error from server (NotFound): deployments.apps "bar" not found
I1010 14:13:28.010] has:"bar" not found
I1010 14:13:28.185] pod/test-pod created
I1010 14:13:28.381] replicaset.apps/frontend created
W1010 14:13:28.482] I1010 14:13:28.384432   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570716807-17313", Name:"frontend", UID:"72fbbe88-1b9b-4e66-9be4-3994bc0e5d19", APIVersion:"apps/v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dc6mq
W1010 14:13:28.482] I1010 14:13:28.387523   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570716807-17313", Name:"frontend", UID:"72fbbe88-1b9b-4e66-9be4-3994bc0e5d19", APIVersion:"apps/v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mxp82
W1010 14:13:28.483] I1010 14:13:28.388835   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570716807-17313", Name:"frontend", UID:"72fbbe88-1b9b-4e66-9be4-3994bc0e5d19", APIVersion:"apps/v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8srpc
I1010 14:13:28.583] configmap/test-set-env-config created
I1010 14:13:28.662] Successful
I1010 14:13:28.662] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I1010 14:13:28.662] has:not implemented
I1010 14:13:28.756] Successful
I1010 14:13:28.756] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1010 14:13:28.756] has not:not found
I1010 14:13:28.758] Successful
I1010 14:13:28.758] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1010 14:13:28.758] has not:pod or type/name must be specified
I1010 14:13:28.871] Successful
I1010 14:13:28.872] message:Error from server (BadRequest): pod frontend-8srpc does not have a host assigned
I1010 14:13:28.872] has not:not found
I1010 14:13:28.873] Successful
I1010 14:13:28.874] message:Error from server (BadRequest): pod frontend-8srpc does not have a host assigned
I1010 14:13:28.874] has not:pod or type/name must be specified
I1010 14:13:28.952] pod "test-pod" deleted
I1010 14:13:29.038] replicaset.apps "frontend" deleted
I1010 14:13:29.122] configmap "test-set-env-config" deleted
I1010 14:13:29.147] +++ exit code: 0
I1010 14:13:29.189] Recording: run_create_secret_tests
I1010 14:13:29.189] Running command: run_create_secret_tests
I1010 14:13:29.218] 
I1010 14:13:29.221] +++ Running case: test-cmd.run_create_secret_tests 
I1010 14:13:29.225] +++ working dir: /go/src/k8s.io/kubernetes
I1010 14:13:29.228] +++ command: run_create_secret_tests
I1010 14:13:29.321] Successful
I1010 14:13:29.322] message:Error from server (NotFound): secrets "mysecret" not found
I1010 14:13:29.322] has:secrets "mysecret" not found
I1010 14:13:29.479] Successful
I1010 14:13:29.479] message:Error from server (NotFound): secrets "mysecret" not found
I1010 14:13:29.480] has:secrets "mysecret" not found
I1010 14:13:29.482] Successful
I1010 14:13:29.482] message:user-specified
I1010 14:13:29.483] has:user-specified
I1010 14:13:29.558] Successful
I1010 14:13:29.649] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"5f0f612f-b5e9-4b7e-923f-ffcac69eef5d","resourceVersion":"784","creationTimestamp":"2019-10-10T14:13:29Z"}}
... skipping 2 lines ...
I1010 14:13:29.833] has:uid
I1010 14:13:29.909] Successful
I1010 14:13:29.909] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"5f0f612f-b5e9-4b7e-923f-ffcac69eef5d","resourceVersion":"785","creationTimestamp":"2019-10-10T14:13:29Z"},"data":{"key1":"config1"}}
I1010 14:13:29.910] has:config1
I1010 14:13:29.977] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"5f0f612f-b5e9-4b7e-923f-ffcac69eef5d"}}
I1010 14:13:30.071] Successful
I1010 14:13:30.072] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I1010 14:13:30.072] has:configmaps "tester-update-cm" not found
I1010 14:13:30.087] +++ exit code: 0
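The secret and ConfigMap lifecycle exercised above maps onto plain kubectl verbs; a hedged sketch (the test itself drives some of these steps through --raw API calls rather than the verbs shown here):
  kubectl get secret mysecret                    # expected: NotFound
  kubectl create configmap tester-update-cm
  kubectl patch configmap tester-update-cm --type=merge -p '{"data":{"key1":"config1"}}'
  kubectl get configmap tester-update-cm -o jsonpath='{.data.key1}'   # prints: config1
  kubectl delete configmap tester-update-cm
  kubectl get configmap tester-update-cm         # expected: NotFound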
I1010 14:13:30.128] Recording: run_kubectl_create_kustomization_directory_tests
I1010 14:13:30.128] Running command: run_kubectl_create_kustomization_directory_tests
I1010 14:13:30.156] 
I1010 14:13:30.158] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
I1010 14:13:32.965] valid-pod   0/1     Pending   0          0s
I1010 14:13:32.965] has:valid-pod
I1010 14:13:34.059] Successful
I1010 14:13:34.059] message:NAME        READY   STATUS    RESTARTS   AGE
I1010 14:13:34.060] valid-pod   0/1     Pending   0          1s
I1010 14:13:34.060] STATUS      REASON          MESSAGE
I1010 14:13:34.060] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1010 14:13:34.060] has:Timeout exceeded while reading body
I1010 14:13:34.146] Successful
I1010 14:13:34.147] message:NAME        READY   STATUS    RESTARTS   AGE
I1010 14:13:34.147] valid-pod   0/1     Pending   0          2s
I1010 14:13:34.147] has:valid-pod
I1010 14:13:34.229] Successful
I1010 14:13:34.230] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I1010 14:13:34.230] has:Invalid timeout value
I1010 14:13:34.314] pod "valid-pod" deleted
I1010 14:13:34.346] +++ exit code: 0
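Both timeout outcomes above come from kubectl's client-side request timeout; a sketch, assuming --request-timeout behaves as in this era's kubectl:
  # A short timeout on a watch is cut off mid-stream:
  kubectl get pod valid-pod --watch --request-timeout=1   # expected: Client.Timeout exceeded while reading body
  # A malformed value is rejected before any request is sent:
  kubectl get pod valid-pod --request-timeout=invalid     # expected: Invalid timeout value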
I1010 14:13:34.392] Recording: run_crd_tests
I1010 14:13:34.392] Running command: run_crd_tests
I1010 14:13:34.421] 
... skipping 168 lines ...
I1010 14:13:39.300] foo.company.com/test patched
I1010 14:13:39.409] crd.sh:236: Successful get foos/test {{.patched}}: value1
I1010 14:13:39.572] foo.company.com/test patched
I1010 14:13:39.676] crd.sh:238: Successful get foos/test {{.patched}}: value2
I1010 14:13:39.763] foo.company.com/test patched
I1010 14:13:39.875] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I1010 14:13:40.036] +++ [1010 14:13:40] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge

I1010 14:13:40.097] {
I1010 14:13:40.097]     "apiVersion": "company.com/v1",
I1010 14:13:40.097]     "kind": "Foo",
I1010 14:13:40.098]     "metadata": {
I1010 14:13:40.098]         "annotations": {
I1010 14:13:40.098]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 181 lines ...
I1010 14:13:48.347] bar.company.com/test created
I1010 14:13:48.452] crd.sh:455: Successful get bars {{len .items}}: 1
I1010 14:13:48.532] namespace "non-native-resources" deleted
I1010 14:13:53.742] crd.sh:458: Successful get bars {{len .items}}: 0
I1010 14:13:53.903] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
I1010 14:13:54.003] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
W1010 14:13:54.104] Error from server (NotFound): namespaces "non-native-resources" not found
I1010 14:13:54.204] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1010 14:13:54.211] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I1010 14:13:54.242] +++ exit code: 0
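As the crd.sh steps above show, custom resources carry no strategic-merge-patch metadata, so kubectl patch must fall back to a JSON merge patch; setting a key to null deletes it. A sketch against the foos/test object used above:
  kubectl patch foos/test --type=merge -p '{"patched":"value1"}'
  kubectl get foos/test -o jsonpath='{.patched}'                 # prints: value1
  kubectl patch foos/test --type=merge -p '{"patched":null}'     # merge-patch null removes the field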
I1010 14:13:54.286] Recording: run_cmd_with_img_tests
I1010 14:13:54.287] Running command: run_cmd_with_img_tests
I1010 14:13:54.316] 
... skipping 9 lines ...
W1010 14:13:54.623] I1010 14:13:54.615750   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570716834-30492", Name:"test1-6cdffdb5b8", UID:"0a252713-57fb-4651-9763-4f66dfecbfbe", APIVersion:"apps/v1", ResourceVersion:"931", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-c625d
I1010 14:13:54.724] Successful
I1010 14:13:54.725] message:deployment.apps/test1 created
I1010 14:13:54.725] has:deployment.apps/test1 created
I1010 14:13:54.725] deployment.apps "test1" deleted
I1010 14:13:54.791] Successful
I1010 14:13:54.791] message:error: Invalid image name "InvalidImageName": invalid reference format
I1010 14:13:54.791] has:error: Invalid image name "InvalidImageName": invalid reference format
I1010 14:13:54.807] +++ exit code: 0
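The image checks above contrast a well-formed and a malformed image reference; a hedged sketch (where exactly the malformed name is rejected, client-side generator vs. server, depends on the verb and version):
  kubectl create deployment test1 --image=k8s.gcr.io/nginx:1.7.9   # deployment.apps/test1 created
  kubectl delete deployment test1
  kubectl create deployment test2 --image=InvalidImageName         # expected: invalid reference format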
I1010 14:13:54.854] +++ [1010 14:13:54] Testing recursive resources
I1010 14:13:54.863] +++ [1010 14:13:54] Creating namespace namespace-1570716834-32152
I1010 14:13:54.941] namespace/namespace-1570716834-32152 created
I1010 14:13:55.014] Context "test" modified.
I1010 14:13:55.110] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:13:55.446] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1010 14:13:55.448] Successful
I1010 14:13:55.449] message:pod/busybox0 created
I1010 14:13:55.449] pod/busybox1 created
I1010 14:13:55.449] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1010 14:13:55.450] has:error validating data: kind not set
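The recursive fixtures used here place one deliberately broken manifest (its kind key is misspelled as "ind") alongside valid ones, so each verb must succeed on the good files and report the bad one. A sketch of the create step:
  # -R/--recursive walks the directory tree; the valid pods are created, the broken manifest errors:
  kubectl create -f hack/testdata/recursive/pod --recursive
  # Skipping schema validation surfaces the decode error ("Object 'Kind' is missing") instead:
  kubectl create -f hack/testdata/recursive/pod --recursive --validate=false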
I1010 14:13:55.550] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1010 14:13:55.730] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I1010 14:13:55.733] Successful
I1010 14:13:55.734] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1010 14:13:55.734] has:Object 'Kind' is missing
I1010 14:13:55.838] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1010 14:13:56.192] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1010 14:13:56.195] Successful
I1010 14:13:56.195] message:pod/busybox0 replaced
I1010 14:13:56.195] pod/busybox1 replaced
I1010 14:13:56.195] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1010 14:13:56.195] has:error validating data: kind not set
I1010 14:13:56.298] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1010 14:13:56.405] Successful
I1010 14:13:56.405] message:Name:         busybox0
I1010 14:13:56.405] Namespace:    namespace-1570716834-32152
I1010 14:13:56.405] Priority:     0
I1010 14:13:56.405] Node:         <none>
... skipping 159 lines ...
I1010 14:13:56.428] has:Object 'Kind' is missing
I1010 14:13:56.517] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1010 14:13:56.728] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I1010 14:13:56.733] Successful
I1010 14:13:56.733] message:pod/busybox0 annotated
I1010 14:13:56.733] pod/busybox1 annotated
I1010 14:13:56.733] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1010 14:13:56.734] has:Object 'Kind' is missing
I1010 14:13:56.833] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1010 14:13:57.168] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1010 14:13:57.171] Successful
I1010 14:13:57.172] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1010 14:13:57.172] pod/busybox0 configured
I1010 14:13:57.172] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1010 14:13:57.172] pod/busybox1 configured
I1010 14:13:57.172] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1010 14:13:57.172] has:error validating data: kind not set
W1010 14:13:57.273] W1010 14:13:54.916678   49490 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1010 14:13:57.273] E1010 14:13:54.918189   53040 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:13:57.274] W1010 14:13:55.010534   49490 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1010 14:13:57.274] E1010 14:13:55.012243   53040 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:13:57.274] W1010 14:13:55.116444   49490 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1010 14:13:57.274] E1010 14:13:55.118319   53040 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:13:57.274] W1010 14:13:55.219205   49490 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1010 14:13:57.275] E1010 14:13:55.220771   53040 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:13:57.275] E1010 14:13:55.919612   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:13:57.275] E1010 14:13:56.013624   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:13:57.275] E1010 14:13:56.122288   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:13:57.275] E1010 14:13:56.222384   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:13:57.276] E1010 14:13:56.921103   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:13:57.276] E1010 14:13:57.015238   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:13:57.276] E1010 14:13:57.124062   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:13:57.277] E1010 14:13:57.225298   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:13:57.377] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:13:57.466] deployment.apps/nginx created
W1010 14:13:57.566] I1010 14:13:57.469724   53040 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1570716834-32152", Name:"nginx", UID:"b78c4722-b0f0-4111-87ab-9e3cbb65b596", APIVersion:"apps/v1", ResourceVersion:"957", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W1010 14:13:57.567] I1010 14:13:57.472462   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570716834-32152", Name:"nginx-f87d999f7", UID:"2208c5d8-f6ad-46ec-a066-609cea709235", APIVersion:"apps/v1", ResourceVersion:"958", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-gw4db
W1010 14:13:57.567] I1010 14:13:57.478995   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570716834-32152", Name:"nginx-f87d999f7", UID:"2208c5d8-f6ad-46ec-a066-609cea709235", APIVersion:"apps/v1", ResourceVersion:"958", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-5sx4m
W1010 14:13:57.568] I1010 14:13:57.479544   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570716834-32152", Name:"nginx-f87d999f7", UID:"2208c5d8-f6ad-46ec-a066-609cea709235", APIVersion:"apps/v1", ResourceVersion:"958", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-2wnj8
... skipping 43 lines ...
I1010 14:13:57.874]       terminationGracePeriodSeconds: 30
I1010 14:13:57.874] status: {}
I1010 14:13:57.874] has:extensions/v1beta1
I1010 14:13:57.951] deployment.apps "nginx" deleted
W1010 14:13:58.051] kubectl convert is DEPRECATED and will be removed in a future version.
W1010 14:13:58.052] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W1010 14:13:58.052] E1010 14:13:57.922682   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:13:58.052] E1010 14:13:58.016960   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:13:58.126] E1010 14:13:58.125677   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:13:58.227] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1010 14:13:58.243] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1010 14:13:58.246] Successful
I1010 14:13:58.246] message:kubectl convert is DEPRECATED and will be removed in a future version.
I1010 14:13:58.246] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1010 14:13:58.247] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1010 14:13:58.247] has:Object 'Kind' is missing
I1010 14:13:58.347] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1010 14:13:58.441] Successful
I1010 14:13:58.442] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1010 14:13:58.442] has:busybox0:busybox1:
I1010 14:13:58.444] Successful
I1010 14:13:58.445] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1010 14:13:58.445] has:Object 'Kind' is missing
I1010 14:13:58.547] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1010 14:13:58.647] pod/busybox0 labeled
I1010 14:13:58.647] pod/busybox1 labeled
I1010 14:13:58.647] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1010 14:13:58.751] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I1010 14:13:58.754] Successful
I1010 14:13:58.755] message:pod/busybox0 labeled
I1010 14:13:58.755] pod/busybox1 labeled
I1010 14:13:58.755] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1010 14:13:58.755] has:Object 'Kind' is missing
I1010 14:13:58.858] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1010 14:13:58.950] pod/busybox0 patched
I1010 14:13:58.951] pod/busybox1 patched
I1010 14:13:58.951] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1010 14:13:59.053] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I1010 14:13:59.055] Successful
I1010 14:13:59.056] message:pod/busybox0 patched
I1010 14:13:59.056] pod/busybox1 patched
I1010 14:13:59.056] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1010 14:13:59.057] has:Object 'Kind' is missing
I1010 14:13:59.153] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1010 14:13:59.345] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:13:59.349] Successful
I1010 14:13:59.350] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1010 14:13:59.350] pod "busybox0" force deleted
I1010 14:13:59.350] pod "busybox1" force deleted
I1010 14:13:59.350] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1010 14:13:59.351] has:Object 'Kind' is missing
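The force-deletion above skips graceful termination entirely, which is what the preceding warning is about; a one-line sketch:
  # --force with --grace-period=0 deletes immediately, without waiting for kubelet confirmation:
  kubectl delete -f hack/testdata/recursive/pod --recursive --force --grace-period=0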
I1010 14:13:59.455] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:13:59.629] replicationcontroller/busybox0 created
I1010 14:13:59.634] replicationcontroller/busybox1 created
W1010 14:13:59.735] E1010 14:13:58.226966   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:13:59.735] I1010 14:13:58.636690   53040 namespace_controller.go:185] Namespace has been deleted non-native-resources
W1010 14:13:59.736] E1010 14:13:58.924344   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:13:59.736] E1010 14:13:59.018924   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:13:59.737] E1010 14:13:59.127032   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:13:59.737] E1010 14:13:59.228517   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:13:59.738] I1010 14:13:59.633552   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570716834-32152", Name:"busybox0", UID:"aeb5f6c8-f5af-4007-a7b2-8b8c13d7ae89", APIVersion:"v1", ResourceVersion:"988", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-82hvr
W1010 14:13:59.738] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1010 14:13:59.739] I1010 14:13:59.637892   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570716834-32152", Name:"busybox1", UID:"99271351-1ac2-4bba-9835-9c1b35460ae4", APIVersion:"v1", ResourceVersion:"990", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-scq9k
I1010 14:13:59.840] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1010 14:13:59.852] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1010 14:13:59.959] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I1010 14:14:00.057] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I1010 14:14:00.251] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1010 14:14:00.349] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1010 14:14:00.351] Successful
I1010 14:14:00.352] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I1010 14:14:00.352] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I1010 14:14:00.352] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1010 14:14:00.352] has:Object 'Kind' is missing
I1010 14:14:00.434] horizontalpodautoscaler.autoscaling "busybox0" deleted
I1010 14:14:00.526] horizontalpodautoscaler.autoscaling "busybox1" deleted
I1010 14:14:00.629] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1010 14:14:00.721] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I1010 14:14:00.815] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I1010 14:14:01.010] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1010 14:14:01.102] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1010 14:14:01.105] Successful
I1010 14:14:01.105] message:service/busybox0 exposed
I1010 14:14:01.105] service/busybox1 exposed
I1010 14:14:01.106] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1010 14:14:01.106] has:Object 'Kind' is missing
I1010 14:14:01.208] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1010 14:14:01.301] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I1010 14:14:01.400] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I1010 14:14:01.605] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I1010 14:14:01.697] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I1010 14:14:01.700] Successful
I1010 14:14:01.701] message:replicationcontroller/busybox0 scaled
I1010 14:14:01.701] replicationcontroller/busybox1 scaled
I1010 14:14:01.701] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1010 14:14:01.702] has:Object 'Kind' is missing
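The same recursive pattern is exercised against mutating verbs (autoscale, expose, scale); a hedged sketch, assuming these verbs accept -f with --recursive as the log indicates:
  kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80
  kubectl expose    -f hack/testdata/recursive/rc --recursive --port=80
  kubectl scale     -f hack/testdata/recursive/rc --recursive --replicas=2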
I1010 14:14:01.800] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1010 14:14:02.027] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:14:02.031] Successful
I1010 14:14:02.032] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1010 14:14:02.032] replicationcontroller "busybox0" force deleted
I1010 14:14:02.032] replicationcontroller "busybox1" force deleted
I1010 14:14:02.033] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1010 14:14:02.033] has:Object 'Kind' is missing
I1010 14:14:02.132] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:14:02.320] deployment.apps/nginx1-deployment created
I1010 14:14:02.323] deployment.apps/nginx0-deployment created
W1010 14:14:02.424] E1010 14:13:59.926070   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:02.424] E1010 14:14:00.020500   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:02.425] E1010 14:14:00.128958   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:02.425] E1010 14:14:00.230274   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:02.425] E1010 14:14:00.927544   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:02.426] E1010 14:14:01.022171   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:02.426] E1010 14:14:01.131038   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:02.426] E1010 14:14:01.232376   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:02.427] I1010 14:14:01.493066   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570716834-32152", Name:"busybox0", UID:"aeb5f6c8-f5af-4007-a7b2-8b8c13d7ae89", APIVersion:"v1", ResourceVersion:"1009", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-x266f
W1010 14:14:02.427] I1010 14:14:01.501627   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570716834-32152", Name:"busybox1", UID:"99271351-1ac2-4bba-9835-9c1b35460ae4", APIVersion:"v1", ResourceVersion:"1013", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-jnb4h
W1010 14:14:02.427] E1010 14:14:01.934634   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:02.428] E1010 14:14:02.023479   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:02.428] E1010 14:14:02.133298   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:02.428] E1010 14:14:02.234076   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:02.429] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1010 14:14:02.429] I1010 14:14:02.324272   53040 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1570716834-32152", Name:"nginx1-deployment", UID:"078b134a-85ee-456b-a8c5-21d953a762e7", APIVersion:"apps/v1", ResourceVersion:"1029", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
W1010 14:14:02.429] I1010 14:14:02.326431   53040 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1570716834-32152", Name:"nginx0-deployment", UID:"d1624fd2-0df5-4351-afb0-c9d9052395c8", APIVersion:"apps/v1", ResourceVersion:"1030", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
W1010 14:14:02.430] I1010 14:14:02.329757   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570716834-32152", Name:"nginx1-deployment-7bdbbfb5cf", UID:"c595fd25-8f57-4d53-95c4-4e40ae8035ca", APIVersion:"apps/v1", ResourceVersion:"1031", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-w8jfh
W1010 14:14:02.430] I1010 14:14:02.330356   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570716834-32152", Name:"nginx0-deployment-57c6bff7f6", UID:"d18d8a2d-2a95-4dc2-95c9-8a5d0b445231", APIVersion:"apps/v1", ResourceVersion:"1032", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-9b65g
W1010 14:14:02.431] I1010 14:14:02.332628   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570716834-32152", Name:"nginx1-deployment-7bdbbfb5cf", UID:"c595fd25-8f57-4d53-95c4-4e40ae8035ca", APIVersion:"apps/v1", ResourceVersion:"1031", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-k2thp
W1010 14:14:02.431] I1010 14:14:02.335260   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1570716834-32152", Name:"nginx0-deployment-57c6bff7f6", UID:"d18d8a2d-2a95-4dc2-95c9-8a5d0b445231", APIVersion:"apps/v1", ResourceVersion:"1032", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-b8nvt
I1010 14:14:02.532] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I1010 14:14:02.563] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1010 14:14:02.782] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1010 14:14:02.784] Successful
I1010 14:14:02.785] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I1010 14:14:02.785] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I1010 14:14:02.785] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1010 14:14:02.786] has:Object 'Kind' is missing
I1010 14:14:02.882] deployment.apps/nginx1-deployment paused
I1010 14:14:02.886] deployment.apps/nginx0-deployment paused
W1010 14:14:02.987] E1010 14:14:02.936829   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:03.025] E1010 14:14:03.024900   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:03.126] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I1010 14:14:03.126] Successful
I1010 14:14:03.127] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1010 14:14:03.127] has:Object 'Kind' is missing
I1010 14:14:03.127] deployment.apps/nginx1-deployment resumed
I1010 14:14:03.128] deployment.apps/nginx0-deployment resumed
I1010 14:14:03.223] generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I1010 14:14:03.226] Successful
I1010 14:14:03.226] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1010 14:14:03.227] has:Object 'Kind' is missing
W1010 14:14:03.327] E1010 14:14:03.134751   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:03.328] E1010 14:14:03.235473   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:03.412] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1010 14:14:03.428] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1010 14:14:03.529] Successful
I1010 14:14:03.530] message:deployment.apps/nginx1-deployment 
I1010 14:14:03.531] REVISION  CHANGE-CAUSE
I1010 14:14:03.531] 1         <none>
I1010 14:14:03.532] 
I1010 14:14:03.532] deployment.apps/nginx0-deployment 
I1010 14:14:03.533] REVISION  CHANGE-CAUSE
I1010 14:14:03.533] 1         <none>
I1010 14:14:03.533] 
I1010 14:14:03.535] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1010 14:14:03.535] has:nginx0-deployment
I1010 14:14:03.536] Successful
I1010 14:14:03.536] message:deployment.apps/nginx1-deployment 
I1010 14:14:03.536] REVISION  CHANGE-CAUSE
I1010 14:14:03.537] 1         <none>
I1010 14:14:03.537] 
I1010 14:14:03.537] deployment.apps/nginx0-deployment 
I1010 14:14:03.537] REVISION  CHANGE-CAUSE
I1010 14:14:03.537] 1         <none>
I1010 14:14:03.537] 
I1010 14:14:03.538] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1010 14:14:03.538] has:nginx1-deployment
I1010 14:14:03.538] Successful
I1010 14:14:03.538] message:deployment.apps/nginx1-deployment 
I1010 14:14:03.538] REVISION  CHANGE-CAUSE
I1010 14:14:03.538] 1         <none>
I1010 14:14:03.538] 
I1010 14:14:03.539] deployment.apps/nginx0-deployment 
I1010 14:14:03.539] REVISION  CHANGE-CAUSE
I1010 14:14:03.539] 1         <none>
I1010 14:14:03.539] 
I1010 14:14:03.539] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1010 14:14:03.539] has:Object 'Kind' is missing
I1010 14:14:03.539] deployment.apps "nginx1-deployment" force deleted
I1010 14:14:03.540] deployment.apps "nginx0-deployment" force deleted
W1010 14:14:03.939] E1010 14:14:03.938717   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:04.027] E1010 14:14:04.026594   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:04.137] E1010 14:14:04.136304   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:04.237] E1010 14:14:04.237061   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:04.538] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:14:04.710] replicationcontroller/busybox0 created
I1010 14:14:04.718] replicationcontroller/busybox1 created
W1010 14:14:04.819] I1010 14:14:04.713241   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570716834-32152", Name:"busybox0", UID:"6f40ad5e-aca5-452f-b6d8-c7d38fddae33", APIVersion:"v1", ResourceVersion:"1078", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-g7q7s
W1010 14:14:04.819] I1010 14:14:04.717427   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570716834-32152", Name:"busybox1", UID:"381ab9f4-3b7b-4060-8b3f-08b09e51f6ca", APIVersion:"v1", ResourceVersion:"1080", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-s269t
W1010 14:14:04.820] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1010 14:14:04.920] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1010 14:14:04.926] Successful
I1010 14:14:04.926] message:no rollbacker has been implemented for "ReplicationController"
I1010 14:14:04.927] no rollbacker has been implemented for "ReplicationController"
I1010 14:14:04.927] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1010 14:14:04.928] has:no rollbacker has been implemented for "ReplicationController"
I1010 14:14:04.929] Successful
I1010 14:14:04.930] message:no rollbacker has been implemented for "ReplicationController"
I1010 14:14:04.930] no rollbacker has been implemented for "ReplicationController"
I1010 14:14:04.931] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1010 14:14:04.931] has:Object 'Kind' is missing
I1010 14:14:05.029] Successful
I1010 14:14:05.030] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1010 14:14:05.030] error: replicationcontrollers "busybox0" pausing is not supported
I1010 14:14:05.031] error: replicationcontrollers "busybox1" pausing is not supported
I1010 14:14:05.031] has:Object 'Kind' is missing
I1010 14:14:05.032] Successful
I1010 14:14:05.033] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1010 14:14:05.033] error: replicationcontrollers "busybox0" pausing is not supported
I1010 14:14:05.033] error: replicationcontrollers "busybox1" pausing is not supported
I1010 14:14:05.034] has:replicationcontrollers "busybox0" pausing is not supported
I1010 14:14:05.035] Successful
I1010 14:14:05.036] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1010 14:14:05.036] error: replicationcontrollers "busybox0" pausing is not supported
I1010 14:14:05.036] error: replicationcontrollers "busybox1" pausing is not supported
I1010 14:14:05.037] has:replicationcontrollers "busybox1" pausing is not supported
I1010 14:14:05.132] Successful
I1010 14:14:05.133] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1010 14:14:05.133] error: replicationcontrollers "busybox0" resuming is not supported
I1010 14:14:05.133] error: replicationcontrollers "busybox1" resuming is not supported
I1010 14:14:05.134] has:Object 'Kind' is missing
I1010 14:14:05.135] Successful
I1010 14:14:05.136] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1010 14:14:05.136] error: replicationcontrollers "busybox0" resuming is not supported
I1010 14:14:05.136] error: replicationcontrollers "busybox1" resuming is not supported
I1010 14:14:05.136] has:replicationcontrollers "busybox0" resuming is not supported
I1010 14:14:05.138] Successful
I1010 14:14:05.139] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1010 14:14:05.139] error: replicationcontrollers "busybox0" resuming is not supported
I1010 14:14:05.139] error: replicationcontrollers "busybox1" resuming is not supported
I1010 14:14:05.139] has:replicationcontrollers "busybox1" resuming is not supported
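rollout pause and resume are implemented only for kinds that expose a pause/rollback interface (such as Deployments), hence the consistent "not supported" errors for ReplicationControllers above; a sketch:
  kubectl rollout history -f hack/testdata/recursive/rc --recursive   # RC: no rollbacker has been implemented
  kubectl rollout pause   -f hack/testdata/recursive/rc --recursive   # expected: pausing is not supported
  kubectl rollout resume  -f hack/testdata/recursive/rc --recursive   # expected: resuming is not supported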
I1010 14:14:05.219] replicationcontroller "busybox0" force deleted
I1010 14:14:05.224] replicationcontroller "busybox1" force deleted
W1010 14:14:05.325] E1010 14:14:04.940100   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:05.325] E1010 14:14:05.028121   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:05.326] E1010 14:14:05.138364   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:05.326] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1010 14:14:05.326] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W1010 14:14:05.327] E1010 14:14:05.238470   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:05.942] E1010 14:14:05.941773   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:06.030] E1010 14:14:06.029938   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:06.140] E1010 14:14:06.139874   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:06.240] E1010 14:14:06.240020   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:06.341] Recording: run_namespace_tests
I1010 14:14:06.341] Running command: run_namespace_tests
I1010 14:14:06.341] 
I1010 14:14:06.341] +++ Running case: test-cmd.run_namespace_tests 
I1010 14:14:06.341] +++ working dir: /go/src/k8s.io/kubernetes
I1010 14:14:06.342] +++ command: run_namespace_tests
... skipping 2 lines ...
I1010 14:14:06.457] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I1010 14:14:06.539] namespace "my-namespace" deleted
W1010 14:14:06.640] I1010 14:14:06.519761   53040 shared_informer.go:197] Waiting for caches to sync for garbage collector
W1010 14:14:06.640] I1010 14:14:06.519866   53040 shared_informer.go:204] Caches are synced for garbage collector 
W1010 14:14:06.640] I1010 14:14:06.638738   53040 shared_informer.go:197] Waiting for caches to sync for resource quota
W1010 14:14:06.640] I1010 14:14:06.638792   53040 shared_informer.go:204] Caches are synced for resource quota 
W1010 14:14:06.944] E1010 14:14:06.943415   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 18 lines ...
W1010 14:14:11.249] E1010 14:14:11.248372   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:11.636] namespace/my-namespace condition met
I1010 14:14:11.727] Successful
I1010 14:14:11.727] message:Error from server (NotFound): namespaces "my-namespace" not found
I1010 14:14:11.727] has: not found
I1010 14:14:11.807] namespace/my-namespace created
I1010 14:14:11.913] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I1010 14:14:12.139] Successful
I1010 14:14:12.139] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I1010 14:14:12.140] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I1010 14:14:12.144] namespace "namespace-1570716811-19285" deleted
I1010 14:14:12.144] namespace "namespace-1570716812-15753" deleted
I1010 14:14:12.144] namespace "namespace-1570716814-4896" deleted
I1010 14:14:12.144] namespace "namespace-1570716815-7612" deleted
I1010 14:14:12.145] namespace "namespace-1570716834-30492" deleted
I1010 14:14:12.145] namespace "namespace-1570716834-32152" deleted
I1010 14:14:12.145] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I1010 14:14:12.145] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I1010 14:14:12.145] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I1010 14:14:12.145] has:warning: deleting cluster-scoped resources
I1010 14:14:12.145] Successful
I1010 14:14:12.145] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I1010 14:14:12.146] namespace "kube-node-lease" deleted
I1010 14:14:12.146] namespace "my-namespace" deleted
I1010 14:14:12.146] namespace "namespace-1570716695-14624" deleted
... skipping 27 lines ...
I1010 14:14:12.148] namespace "namespace-1570716811-19285" deleted
I1010 14:14:12.149] namespace "namespace-1570716812-15753" deleted
I1010 14:14:12.149] namespace "namespace-1570716814-4896" deleted
I1010 14:14:12.149] namespace "namespace-1570716815-7612" deleted
I1010 14:14:12.149] namespace "namespace-1570716834-30492" deleted
I1010 14:14:12.149] namespace "namespace-1570716834-32152" deleted
I1010 14:14:12.149] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I1010 14:14:12.150] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I1010 14:14:12.150] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I1010 14:14:12.150] has:namespace "my-namespace" deleted
W1010 14:14:12.251] E1010 14:14:11.951294   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:12.251] E1010 14:14:12.039519   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:12.252] E1010 14:14:12.149972   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:12.252] E1010 14:14:12.249573   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:12.353] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I1010 14:14:12.353] namespace/other created
I1010 14:14:12.452] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I1010 14:14:12.561] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:14:12.744] pod/valid-pod created
I1010 14:14:12.862] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1010 14:14:12.966] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1010 14:14:13.062] Successful
I1010 14:14:13.063] message:error: a resource cannot be retrieved by name across all namespaces
I1010 14:14:13.063] has:a resource cannot be retrieved by name across all namespaces
I1010 14:14:13.168] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1010 14:14:13.250] pod "valid-pod" force deleted
W1010 14:14:13.351] E1010 14:14:12.952426   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:13.352] E1010 14:14:13.041044   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:13.352] E1010 14:14:13.151303   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:13.353] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1010 14:14:13.353] E1010 14:14:13.250656   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:13.453] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:14:13.454] namespace "other" deleted
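The "cannot be retrieved by name across all namespaces" failure asserted above is a client-side kubectl check: a resource name and --all-namespaces are mutually exclusive. A minimal sketch against any reachable cluster (not necessarily the suite's exact invocation):

  kubectl get pods valid-pod --all-namespaces
  # error: a resource cannot be retrieved by name across all namespaces
  kubectl get pods --all-namespaces    # listing without a name is allowed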
W1010 14:14:13.954] E1010 14:14:13.953675   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 4 lines ...
W1010 14:14:15.045] E1010 14:14:15.044398   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:15.141] I1010 14:14:15.141038   53040 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1570716834-32152
W1010 14:14:15.145] I1010 14:14:15.145067   53040 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1570716834-32152
W1010 14:14:15.155] E1010 14:14:15.154508   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 12 lines ...
W1010 14:14:18.261] E1010 14:14:18.261087   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:18.577] +++ exit code: 0
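The Forbidden responses in the bulk delete above are expected: the apiserver refuses to delete the default, kube-public, and kube-system namespaces. A rough reproduction, assuming cluster-admin on a disposable test cluster:

  kubectl delete namespace kube-public
  # Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted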
I1010 14:14:18.616] Recording: run_secrets_test
I1010 14:14:18.616] Running command: run_secrets_test
I1010 14:14:18.646] 
I1010 14:14:18.648] +++ Running case: test-cmd.run_secrets_test 
I1010 14:14:18.651] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 58 lines ...
I1010 14:14:20.709] secret "test-secret" deleted
I1010 14:14:20.796] secret/test-secret created
I1010 14:14:20.889] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I1010 14:14:20.977] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I1010 14:14:21.052] secret "test-secret" deleted
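The kubernetes.io/tls type checked at core.sh:774 is what kubectl create secret tls sets; it requires a matching certificate/key pair. A sketch with placeholder file names (tls.crt and tls.key are assumptions, not the suite's fixtures):

  kubectl create secret tls test-secret --namespace=test-secrets --cert=tls.crt --key=tls.key
  kubectl get secret/test-secret --namespace=test-secrets -o go-template='{{.type}}'
  # kubernetes.io/tls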
W1010 14:14:21.153] I1010 14:14:18.900196   68833 loader.go:375] Config loaded from file:  /tmp/tmp.u3zMU9pmS4/.kube/config
W1010 14:14:21.154] E1010 14:14:18.962350   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 10 lines ...
W1010 14:14:21.266] E1010 14:14:21.265689   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:21.367] secret/secret-string-data created
I1010 14:14:21.367] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I1010 14:14:21.410] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I1010 14:14:21.503] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I1010 14:14:21.579] secret "secret-string-data" deleted
I1010 14:14:21.675] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:14:21.845] secret "test-secret" deleted
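The map[k1:djE= k2:djI=] assertions above follow from the apiserver folding .stringData into .data as base64 on write, which is also why core.sh:798 sees .stringData as <no value> afterwards: djE= and djI= are just v1 and v2 encoded. A quick check of the encoding:

  echo -n v1 | base64    # djE=
  echo -n v2 | base64    # djI=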
W1010 14:14:21.946] I1010 14:14:21.719740   53040 namespace_controller.go:185] Namespace has been deleted my-namespace
W1010 14:14:21.967] E1010 14:14:21.967159   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:22.063] E1010 14:14:22.062768   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:22.164] namespace "test-secrets" deleted
W1010 14:14:22.264] E1010 14:14:22.168089   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:22.265] I1010 14:14:22.220473   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716698-16592
W1010 14:14:22.265] I1010 14:14:22.227786   53040 namespace_controller.go:185] Namespace has been deleted kube-node-lease
W1010 14:14:22.265] I1010 14:14:22.232725   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716721-14388
W1010 14:14:22.265] I1010 14:14:22.242922   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716695-14624
W1010 14:14:22.266] I1010 14:14:22.244865   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716722-6349
W1010 14:14:22.266] I1010 14:14:22.247949   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716715-22598
W1010 14:14:22.266] I1010 14:14:22.250824   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716716-7661
W1010 14:14:22.266] I1010 14:14:22.266473   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716703-6360
W1010 14:14:22.268] E1010 14:14:22.267674   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:22.270] I1010 14:14:22.270464   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716723-31729
W1010 14:14:22.295] I1010 14:14:22.295265   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716711-24827
W1010 14:14:22.420] I1010 14:14:22.420149   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716734-15016
W1010 14:14:22.426] I1010 14:14:22.426217   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716751-21353
W1010 14:14:22.442] I1010 14:14:22.441484   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716735-1818
W1010 14:14:22.448] I1010 14:14:22.447605   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716753-15043
... skipping 15 lines ...
W1010 14:14:22.707] I1010 14:14:22.707099   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716811-19285
W1010 14:14:22.799] I1010 14:14:22.798657   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716815-7612
W1010 14:14:22.799] I1010 14:14:22.798718   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716812-15753
W1010 14:14:22.804] I1010 14:14:22.804290   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716814-4896
W1010 14:14:22.813] I1010 14:14:22.813174   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716834-30492
W1010 14:14:22.857] I1010 14:14:22.856906   53040 namespace_controller.go:185] Namespace has been deleted namespace-1570716834-32152
W1010 14:14:22.969] E1010 14:14:22.968583   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:23.065] E1010 14:14:23.064267   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:23.170] E1010 14:14:23.169684   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:23.269] E1010 14:14:23.269252   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:23.545] I1010 14:14:23.544649   53040 namespace_controller.go:185] Namespace has been deleted other
W1010 14:14:23.970] E1010 14:14:23.970115   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 12 lines ...
W1010 14:14:27.071] E1010 14:14:27.070620   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:27.172] +++ exit code: 0
I1010 14:14:27.173] Recording: run_configmap_tests
I1010 14:14:27.173] Running command: run_configmap_tests
I1010 14:14:27.173] 
I1010 14:14:27.173] +++ Running case: test-cmd.run_configmap_tests 
I1010 14:14:27.174] +++ working dir: /go/src/k8s.io/kubernetes
I1010 14:14:27.174] +++ command: run_configmap_tests
I1010 14:14:27.175] +++ [1010 14:14:27] Creating namespace namespace-1570716867-26368
I1010 14:14:27.232] namespace/namespace-1570716867-26368 created
I1010 14:14:27.304] Context "test" modified.
I1010 14:14:27.312] +++ [1010 14:14:27] Testing configmaps
W1010 14:14:27.413] E1010 14:14:27.175910   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:27.414] E1010 14:14:27.275491   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:27.527] configmap/test-configmap created
I1010 14:14:27.638] core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
I1010 14:14:27.720] configmap "test-configmap" deleted
I1010 14:14:27.827] core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
I1010 14:14:27.906] namespace/test-configmaps created
I1010 14:14:27.998] core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
... skipping 3 lines ...
I1010 14:14:28.346] configmap/test-binary-configmap created
I1010 14:14:28.448] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I1010 14:14:28.538] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I1010 14:14:28.803] configmap "test-configmap" deleted
I1010 14:14:28.892] configmap "test-binary-configmap" deleted
I1010 14:14:28.974] namespace "test-configmaps" deleted
W1010 14:14:29.075] E1010 14:14:27.975027   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 15 lines ...
W1010 14:14:31.982] E1010 14:14:31.981464   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:32.032] I1010 14:14:32.031590   53040 namespace_controller.go:185] Namespace has been deleted test-secrets
W1010 14:14:32.079] E1010 14:14:32.078926   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 7 lines ...
W1010 14:14:34.083] E1010 14:14:34.082431   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:34.184] +++ exit code: 0
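The configmap checks follow the same go-template assertion pattern as the rest of the suite. A sketch of the create/assert/delete cycle, assuming the test-configmaps namespace (the literal key and value are placeholders):

  kubectl create configmap test-configmap --namespace=test-configmaps --from-literal=key=value
  kubectl get configmap/test-configmap --namespace=test-configmaps -o go-template='{{.metadata.name}}'
  # test-configmap
  kubectl delete configmap test-configmap --namespace=test-configmaps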
I1010 14:14:34.184] Recording: run_client_config_tests
I1010 14:14:34.184] Running command: run_client_config_tests
I1010 14:14:34.184] 
I1010 14:14:34.184] +++ Running case: test-cmd.run_client_config_tests 
I1010 14:14:34.185] +++ working dir: /go/src/k8s.io/kubernetes
I1010 14:14:34.185] +++ command: run_client_config_tests
I1010 14:14:34.200] +++ [1010 14:14:34] Creating namespace namespace-1570716874-3997
I1010 14:14:34.276] namespace/namespace-1570716874-3997 created
I1010 14:14:34.356] Context "test" modified.
I1010 14:14:34.365] +++ [1010 14:14:34] Testing client config
I1010 14:14:34.441] Successful
I1010 14:14:34.442] message:error: stat missing: no such file or directory
I1010 14:14:34.442] has:missing: no such file or directory
I1010 14:14:34.519] Successful
I1010 14:14:34.519] message:error: stat missing: no such file or directory
I1010 14:14:34.519] has:missing: no such file or directory
I1010 14:14:34.593] Successful
I1010 14:14:34.593] message:error: stat missing: no such file or directory
I1010 14:14:34.593] has:missing: no such file or directory
I1010 14:14:34.671] Successful
I1010 14:14:34.671] message:Error in configuration: context was not found for specified context: missing-context
I1010 14:14:34.671] has:context was not found for specified context: missing-context
I1010 14:14:34.747] Successful
I1010 14:14:34.747] message:error: no server found for cluster "missing-cluster"
I1010 14:14:34.748] has:no server found for cluster "missing-cluster"
I1010 14:14:34.825] Successful
I1010 14:14:34.826] message:error: auth info "missing-user" does not exist
I1010 14:14:34.826] has:auth info "missing-user" does not exist
W1010 14:14:34.927] E1010 14:14:34.187492   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:34.928] E1010 14:14:34.287080   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:34.986] E1010 14:14:34.986289   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:35.084] E1010 14:14:35.084029   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:35.185] Successful
I1010 14:14:35.186] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I1010 14:14:35.187] has:error loading config file
I1010 14:14:35.187] Successful
I1010 14:14:35.188] message:error: stat missing-config: no such file or directory
I1010 14:14:35.188] has:no such file or directory
I1010 14:14:35.189] +++ exit code: 0
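Each client-config case above points kubectl at one deliberately broken element of a kubeconfig and greps for the resulting error. The same probes, sketched directly (the "missing" names are the suite's intentionally absent values):

  kubectl get pods --kubeconfig=missing       # error: stat missing: no such file or directory
  kubectl get pods --context=missing-context  # context was not found for specified context: missing-context
  kubectl get pods --cluster=missing-cluster  # error: no server found for cluster "missing-cluster"
  kubectl get pods --user=missing-user        # error: auth info "missing-user" does not exist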
I1010 14:14:35.189] Recording: run_service_accounts_tests
I1010 14:14:35.189] Running command: run_service_accounts_tests
I1010 14:14:35.190] 
I1010 14:14:35.190] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 4 lines ...
I1010 14:14:35.347] Context "test" modified.
I1010 14:14:35.357] +++ [1010 14:14:35] Testing service accounts
I1010 14:14:35.464] core.sh:828: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-service-accounts\" }}found{{end}}{{end}}:: :
I1010 14:14:35.543] namespace/test-service-accounts created
I1010 14:14:35.651] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I1010 14:14:35.731] serviceaccount/test-service-account created
W1010 14:14:35.831] E1010 14:14:35.189155   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:35.832] E1010 14:14:35.288511   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:35.933] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I1010 14:14:35.933] serviceaccount "test-service-account" deleted
I1010 14:14:36.023] namespace "test-service-accounts" deleted
W1010 14:14:36.124] E1010 14:14:35.987973   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 11 lines ...
W1010 14:14:38.994] E1010 14:14:38.993243   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:39.069] I1010 14:14:39.068184   53040 namespace_controller.go:185] Namespace has been deleted test-configmaps
W1010 14:14:39.091] E1010 14:14:39.091034   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 7 lines ...
W1010 14:14:41.094] E1010 14:14:41.093582   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:41.195] +++ exit code: 0
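Service accounts get the same create/assert/delete treatment. A sketch of the cycle the assertions above imply:

  kubectl create serviceaccount test-service-account --namespace=test-service-accounts
  kubectl get serviceaccount/test-service-account --namespace=test-service-accounts -o go-template='{{.metadata.name}}'
  # test-service-account
  kubectl delete serviceaccount test-service-account --namespace=test-service-accounts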
I1010 14:14:41.195] Recording: run_job_tests
I1010 14:14:41.195] Running command: run_job_tests
I1010 14:14:41.223] 
I1010 14:14:41.226] +++ Running case: test-cmd.run_job_tests 
I1010 14:14:41.230] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I1010 14:14:42.052] Labels:                        run=pi
I1010 14:14:42.052] Annotations:                   <none>
I1010 14:14:42.052] Schedule:                      59 23 31 2 *
I1010 14:14:42.052] Concurrency Policy:            Allow
I1010 14:14:42.053] Suspend:                       False
I1010 14:14:42.053] Successful Job History Limit:  3
I1010 14:14:42.053] Failed Job History Limit:      1
I1010 14:14:42.053] Starting Deadline Seconds:     <unset>
I1010 14:14:42.053] Selector:                      <unset>
I1010 14:14:42.053] Parallelism:                   <unset>
I1010 14:14:42.053] Completions:                   <unset>
I1010 14:14:42.053] Pod Template:
I1010 14:14:42.053]   Labels:  run=pi
... skipping 18 lines ...
I1010 14:14:42.055] Events:              <none>
I1010 14:14:42.153] Successful
I1010 14:14:42.153] message:job.batch/test-job
I1010 14:14:42.153] has:job.batch/test-job
I1010 14:14:42.258] batch.sh:48: Successful get jobs {{range.items}}{{.metadata.name}}{{end}}: 
I1010 14:14:42.350] job.batch/test-job created
W1010 14:14:42.451] E1010 14:14:41.200374   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:42.452] E1010 14:14:41.299549   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:42.452] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1010 14:14:42.452] E1010 14:14:41.998019   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:42.453] E1010 14:14:42.095538   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:42.453] E1010 14:14:42.202121   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:42.453] E1010 14:14:42.303242   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:42.454] I1010 14:14:42.344655   53040 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"473955b7-babf-429d-84a8-7f554cbfe9b0", APIVersion:"batch/v1", ResourceVersion:"1401", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-xwjbt
I1010 14:14:42.554] batch.sh:53: Successful get job/test-job --namespace=test-jobs {{.metadata.name}}: test-job
I1010 14:14:42.555] NAME       COMPLETIONS   DURATION   AGE
I1010 14:14:42.555] test-job   0/1           0s         0s
I1010 14:14:42.629] Name:           test-job
I1010 14:14:42.629] Namespace:      test-jobs
... skipping 3 lines ...
I1010 14:14:42.630]                 run=pi
I1010 14:14:42.630] Annotations:    cronjob.kubernetes.io/instantiate: manual
I1010 14:14:42.630] Controlled By:  CronJob/pi
I1010 14:14:42.630] Parallelism:    1
I1010 14:14:42.630] Completions:    1
I1010 14:14:42.630] Start Time:     Thu, 10 Oct 2019 14:14:42 +0000
I1010 14:14:42.630] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I1010 14:14:42.630] Pod Template:
I1010 14:14:42.630]   Labels:  controller-uid=473955b7-babf-429d-84a8-7f554cbfe9b0
I1010 14:14:42.631]            job-name=test-job
I1010 14:14:42.631]            run=pi
I1010 14:14:42.631]   Containers:
I1010 14:14:42.631]    pi:
... skipping 15 lines ...
I1010 14:14:42.633]   Type    Reason            Age   From            Message
I1010 14:14:42.633]   ----    ------            ----  ----            -------
I1010 14:14:42.633]   Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-xwjbt
I1010 14:14:42.723] job.batch "test-job" deleted
I1010 14:14:42.827] cronjob.batch "pi" deleted
I1010 14:14:42.913] namespace "test-jobs" deleted
W1010 14:14:43.014] E1010 14:14:42.999510   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 12 lines ...
W1010 14:14:46.102] E1010 14:14:46.101943   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:46.117] I1010 14:14:46.116812   53040 namespace_controller.go:185] Namespace has been deleted test-service-accounts
W1010 14:14:46.210] E1010 14:14:46.209324   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 6 lines ...
W1010 14:14:48.106] E1010 14:14:48.105926   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:48.207] +++ exit code: 0
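Two details above are worth decoding: the deprecation warning fires because the CronJob pi was created via kubectl run --generator=cronjob/v1beta1, and the cronjob.kubernetes.io/instantiate: manual annotation plus Controlled By: CronJob/pi on test-job are what kubectl create job --from stamps on. A sketch of the non-deprecated path (image and args are assumptions, not the suite's exact fixture):

  kubectl create cronjob pi --schedule='59 23 31 2 *' --image=k8s.gcr.io/perl --namespace=test-jobs -- perl -Mbignum=bpi -wle 'print bpi(20)'
  kubectl create job test-job --from=cronjob/pi --namespace=test-jobs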
I1010 14:14:48.207] Recording: run_create_job_tests
I1010 14:14:48.207] Running command: run_create_job_tests
I1010 14:14:48.207] 
I1010 14:14:48.207] +++ Running case: test-cmd.run_create_job_tests 
I1010 14:14:48.207] +++ working dir: /go/src/k8s.io/kubernetes
I1010 14:14:48.207] +++ command: run_create_job_tests
I1010 14:14:48.208] +++ [1010 14:14:48] Creating namespace namespace-1570716888-28728
I1010 14:14:48.235] namespace/namespace-1570716888-28728 created
I1010 14:14:48.306] Context "test" modified.
I1010 14:14:48.387] job.batch/test-job created
W1010 14:14:48.488] E1010 14:14:48.212137   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:48.489] E1010 14:14:48.314210   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:48.489] I1010 14:14:48.387035   53040 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1570716888-28728", Name:"test-job", UID:"1c992327-f3ad-4e1d-ab98-7594cc60436a", APIVersion:"batch/v1", ResourceVersion:"1422", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-fcnn5
I1010 14:14:48.590] create.sh:86: Successful get job test-job {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/nginx:test-cmd
I1010 14:14:48.590] job.batch "test-job" deleted
I1010 14:14:48.669] job.batch/test-job-pi created
I1010 14:14:48.770] create.sh:92: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl
I1010 14:14:48.847] job.batch "test-job-pi" deleted
... skipping 16 lines ...
I1010 14:14:49.590] Context "test" modified.
I1010 14:14:49.599] +++ [1010 14:14:49] Testing pod templates
I1010 14:14:49.700] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:14:49.872] podtemplate/nginx created
W1010 14:14:49.973] I1010 14:14:48.661472   53040 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1570716888-28728", Name:"test-job-pi", UID:"100fe1e3-11df-4d14-b687-5aa89e096df2", APIVersion:"batch/v1", ResourceVersion:"1429", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-r6wbv
W1010 14:14:49.973] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1010 14:14:49.973] E1010 14:14:49.009403   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:49.974] I1010 14:14:49.029206   53040 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1570716888-28728", Name:"my-pi", UID:"ee2882df-c1d2-48a2-9904-e8f8712e9d89", APIVersion:"batch/v1", ResourceVersion:"1437", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-2w9qd
W1010 14:14:49.974] E1010 14:14:49.107419   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:49.974] E1010 14:14:49.213402   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:49.974] E1010 14:14:49.315716   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:49.974] I1010 14:14:49.868569   49490 controller.go:606] quota admission added evaluator for: podtemplates
W1010 14:14:50.011] E1010 14:14:50.011198   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:50.109] E1010 14:14:50.108942   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:50.210] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1010 14:14:50.211] NAME    CONTAINERS   IMAGES   POD LABELS
I1010 14:14:50.211] nginx   nginx        nginx    name=nginx
I1010 14:14:50.261] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1010 14:14:50.339] podtemplate "nginx" deleted
I1010 14:14:50.445] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
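podtemplate is a seldom-used core resource, which is why the apiserver only registers a quota evaluator for it on first use (the "quota admission added evaluator for: podtemplates" line above). A minimal manifest that would reproduce the nginx object (field values are a plausible sketch, not the suite's fixture; pod-template.yaml is a placeholder name):

  # pod-template.yaml
  apiVersion: v1
  kind: PodTemplate
  metadata:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx

  kubectl create -f pod-template.yaml
  kubectl get podtemplates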
... skipping 7 lines ...
I1010 14:14:50.636] Context "test" modified.
I1010 14:14:50.645] +++ [1010 14:14:50] Testing kubectl(v1:services)
I1010 14:14:50.742] core.sh:858: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1010 14:14:50.902] service/redis-master created
I1010 14:14:51.009] core.sh:862: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1010 14:14:51.119] 
I1010 14:14:51.126] core.sh:864: FAIL!
I1010 14:14:51.126] Describe services redis-master
I1010 14:14:51.126]   Expected Match: Name:
I1010 14:14:51.127]   Not found in:
I1010 14:14:51.127] Name:              redis-master
I1010 14:14:51.127] Namespace:         default
I1010 14:14:51.127] Labels:            app=redis
... skipping 8 lines ...
I1010 14:14:51.128] Endpoints:         <none>
I1010 14:14:51.128] Session Affinity:  None
I1010 14:14:51.129] Events:            <none>
I1010 14:14:51.129] 
I1010 14:14:51.129] 864 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
I1010 14:14:51.129] 
W1010 14:14:51.230] E1010 14:14:50.215177   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 4 lines ...
W1010 14:14:51.320] E1010 14:14:51.319301   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:51.420] core.sh:866: Successful describe
I1010 14:14:51.421] Name:              redis-master
I1010 14:14:51.421] Namespace:         default
I1010 14:14:51.421] Labels:            app=redis
I1010 14:14:51.421]                    role=master
I1010 14:14:51.422]                    tier=backend
... skipping 36 lines ...
I1010 14:14:51.448] TargetPort:        6379/TCP
I1010 14:14:51.448] Endpoints:         <none>
I1010 14:14:51.449] Session Affinity:  None
I1010 14:14:51.449] Events:            <none>
I1010 14:14:51.449]
I1010 14:14:51.554] 
I1010 14:14:51.555] FAIL!
I1010 14:14:51.555] Describe services
I1010 14:14:51.555]   Expected Match: Name:
I1010 14:14:51.555]   Not found in:
I1010 14:14:51.555] Name:              kubernetes
I1010 14:14:51.555] Namespace:         default
I1010 14:14:51.555] Labels:            component=apiserver
... skipping 157 lines ...
I1010 14:14:52.161]   type: ClusterIP
I1010 14:14:52.161] status:
I1010 14:14:52.162]   loadBalancer: {}
I1010 14:14:52.246] service/redis-master selector updated
I1010 14:14:52.346] core.sh:890: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: padawan:
I1010 14:14:52.432] service/redis-master selector updated
W1010 14:14:52.534] E1010 14:14:52.015080   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:52.535] E1010 14:14:52.112137   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:52.535] E1010 14:14:52.218265   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:52.536] E1010 14:14:52.320910   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:52.636] core.sh:894: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I1010 14:14:52.645] apiVersion: v1
I1010 14:14:52.646] kind: Service
I1010 14:14:52.646] metadata:
I1010 14:14:52.646]   creationTimestamp: "2019-10-10T14:14:50Z"
I1010 14:14:52.646]   labels:
... skipping 14 lines ...
I1010 14:14:52.648]   selector:
I1010 14:14:52.648]     role: padawan
I1010 14:14:52.648]   sessionAffinity: None
I1010 14:14:52.648]   type: ClusterIP
I1010 14:14:52.648] status:
I1010 14:14:52.649]   loadBalancer: {}
W1010 14:14:52.749] error: you must specify resources by --filename when --local is set.
W1010 14:14:52.750] Example resource specifications include:
W1010 14:14:52.750]    '-f rsrc.yaml'
W1010 14:14:52.750]    '--filename=rsrc.json'
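The error block above is kubectl's --local guard: with --local set, the command must not contact the apiserver, so the object to transform has to be supplied with --filename. A hypothetical reconstruction of that validation (struct and field names invented for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// setSelectorOptions stands in (hypothetically) for the flag set behind
// `kubectl set selector`; only the two fields relevant to the error above.
type setSelectorOptions struct {
	local     bool
	filenames []string
}

// validate reproduces the guard: --local means no apiserver round-trip,
// so the object must come from --filename.
func (o setSelectorOptions) validate() error {
	if o.local && len(o.filenames) == 0 {
		return errors.New("you must specify resources by --filename when --local is set")
	}
	return nil
}

func main() {
	fmt.Println(setSelectorOptions{local: true}.validate())
}
```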
I1010 14:14:52.850] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I1010 14:14:53.033] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1010 14:14:53.124] service "redis-master" deleted
W1010 14:14:53.225] I1010 14:14:53.009272   53040 namespace_controller.go:185] Namespace has been deleted test-jobs
W1010 14:14:53.225] E1010 14:14:53.016820   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:53.226] E1010 14:14:53.113544   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:53.226] E1010 14:14:53.220309   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:53.323] E1010 14:14:53.322296   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:53.424] core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1010 14:14:53.424] core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1010 14:14:53.517] service/redis-master created
I1010 14:14:53.626] core.sh:920: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1010 14:14:53.719] core.sh:924: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1010 14:14:53.878] service/service-v1-test created
... skipping 2 lines ...
I1010 14:14:54.274] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I1010 14:14:54.356] service "redis-master" deleted
I1010 14:14:54.447] service "service-v1-test" deleted
I1010 14:14:54.550] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1010 14:14:54.652] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1010 14:14:54.824] service/redis-master created
W1010 14:14:54.925] E1010 14:14:54.018366   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:54.925] E1010 14:14:54.115051   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:54.925] E1010 14:14:54.221600   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:54.926] E1010 14:14:54.323834   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:55.020] E1010 14:14:55.019908   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:55.117] E1010 14:14:55.116790   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:55.218] service/redis-slave created
I1010 14:14:55.218] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I1010 14:14:55.219] Successful
I1010 14:14:55.219] message:NAME           RSRC
I1010 14:14:55.219] kubernetes     145
I1010 14:14:55.220] redis-master   1471
... skipping 31 lines ...
I1010 14:14:57.104] Context "test" modified.
I1010 14:14:57.113] +++ [1010 14:14:57] Testing kubectl(v1:daemonsets)
I1010 14:14:57.210] apps.sh:30: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:14:57.375] daemonset.apps/bind created
I1010 14:14:57.482] apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
I1010 14:14:57.674] daemonset.apps/bind configured
W1010 14:14:57.774] E1010 14:14:55.223475   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:57.775] E1010 14:14:55.325268   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:57.775] E1010 14:14:56.021396   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:57.776] E1010 14:14:56.118342   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:57.776] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1010 14:14:57.776] E1010 14:14:56.224826   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:57.776] I1010 14:14:56.232830   53040 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"04b4a6f6-1d3d-44c1-a2fc-6afe32d4060b", APIVersion:"apps/v1", ResourceVersion:"1488", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-bd968f46 to 2
W1010 14:14:57.777] I1010 14:14:56.240360   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"4bd91def-de63-4b87-abe1-94c16d4e77bb", APIVersion:"apps/v1", ResourceVersion:"1489", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-24pdv
W1010 14:14:57.777] I1010 14:14:56.243811   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"4bd91def-de63-4b87-abe1-94c16d4e77bb", APIVersion:"apps/v1", ResourceVersion:"1489", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-nkjx6
W1010 14:14:57.777] E1010 14:14:56.326756   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:57.778] E1010 14:14:57.023060   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:57.778] E1010 14:14:57.119961   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:57.778] E1010 14:14:57.226687   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:57.779] E1010 14:14:57.328890   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:57.779] I1010 14:14:57.373201   49490 controller.go:606] quota admission added evaluator for: daemonsets.apps
W1010 14:14:57.779] I1010 14:14:57.382942   49490 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1010 14:14:57.880] apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
I1010 14:14:57.881] daemonset.apps/bind image updated
I1010 14:14:57.975] apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
I1010 14:14:58.066] daemonset.apps/bind env updated
I1010 14:14:58.165] apps.sh:42: Successful get daemonsets bind {{.metadata.generation}}: 3
I1010 14:14:58.256] daemonset.apps/bind resource requirements updated
W1010 14:14:58.357] E1010 14:14:58.025039   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:58.358] E1010 14:14:58.121478   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:58.358] E1010 14:14:58.228412   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:58.358] E1010 14:14:58.330481   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:58.459] apps.sh:44: Successful get daemonsets bind {{.metadata.generation}}: 4
I1010 14:14:58.459] daemonset.apps/bind restarted
I1010 14:14:58.558] apps.sh:48: Successful get daemonsets bind {{.metadata.generation}}: 5
I1010 14:14:58.638] daemonset.apps "bind" deleted
I1010 14:14:58.667] +++ exit code: 0
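The apps.sh:34-48 assertions above track .metadata.generation, which the apiserver increments on every spec change; that is why each `kubectl set image/env/resources` and `rollout restart` advances it by exactly one. Reading the field with client-go looks roughly like this (kubeconfig path and namespace are hypothetical, and the context-taking Get assumes a current client-go):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path pointing at the test cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// .metadata.generation is bumped by the apiserver on every spec change,
	// which is what the generation assertions above are counting.
	ds, err := client.AppsV1().DaemonSets("default").
		Get(context.TODO(), "bind", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("generation:", ds.Generation)
}
```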
I1010 14:14:58.711] Recording: run_daemonset_history_tests
... skipping 5 lines ...
I1010 14:14:58.767] +++ [1010 14:14:58] Creating namespace namespace-1570716898-18778
I1010 14:14:58.838] namespace/namespace-1570716898-18778 created
I1010 14:14:58.909] Context "test" modified.
I1010 14:14:58.917] +++ [1010 14:14:58] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
I1010 14:14:59.010] apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:14:59.187] daemonset.apps/bind created
W1010 14:14:59.288] E1010 14:14:59.026358   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:59.289] E1010 14:14:59.123249   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:59.289] E1010 14:14:59.230437   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:14:59.332] E1010 14:14:59.331921   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:14:59.434] apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1570716898-18778"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I1010 14:14:59.434]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
I1010 14:14:59.434] daemonset.apps/bind skipped rollback (current template already matches revision 1)
I1010 14:14:59.499] apps.sh:73: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1010 14:14:59.603] apps.sh:74: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1010 14:14:59.781] daemonset.apps/bind configured
... skipping 18 lines ...
I1010 14:15:00.491] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1010 14:15:00.586] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1010 14:15:00.688] daemonset.apps/bind rolled back
I1010 14:15:00.792] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1010 14:15:00.888] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1010 14:15:00.991] Successful
I1010 14:15:00.992] message:error: unable to find specified revision 1000000 in history
I1010 14:15:00.992] has:unable to find specified revision
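The rollback steps above work because the DaemonSet controller snapshots each pod template into a ControllerRevision; `kubectl rollout undo --to-revision=N` re-applies the stored template, and an N that was never recorded produces the "unable to find specified revision 1000000" error just shown. Listing those revisions with client-go looks roughly like this (kubeconfig path hypothetical; the namespace and the "service=bind" label come from the log above):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The revisions `rollout undo` chooses between; the selector matches
	// the DaemonSet's pod labels from the log.
	revs, err := client.AppsV1().ControllerRevisions("namespace-1570716898-18778").
		List(context.TODO(), metav1.ListOptions{LabelSelector: "service=bind"})
	if err != nil {
		panic(err)
	}
	for _, r := range revs.Items {
		fmt.Printf("%s revision=%d\n", r.Name, r.Revision)
	}
}
```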
I1010 14:15:01.081] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1010 14:15:01.171] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1010 14:15:01.279] daemonset.apps/bind rolled back
W1010 14:15:01.380] E1010 14:15:00.027747   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:15:01.380] E1010 14:15:00.124872   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:15:01.381] E1010 14:15:00.232242   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:15:01.381] E1010 14:15:00.333486   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:15:01.382] E1010 14:15:01.029332   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:15:01.382] E1010 14:15:01.126067   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:15:01.382] E1010 14:15:01.233831   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:15:01.383] E1010 14:15:01.335430   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:15:01.483] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1010 14:15:01.488] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1010 14:15:01.587] apps.sh:99: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1010 14:15:01.660] daemonset.apps "bind" deleted
I1010 14:15:01.709] +++ exit code: 0
I1010 14:15:01.751] Recording: run_rc_tests
... skipping 5 lines ...
I1010 14:15:01.808] +++ [1010 14:15:01] Creating namespace namespace-1570716901-18737
I1010 14:15:01.886] namespace/namespace-1570716901-18737 created
I1010 14:15:01.970] Context "test" modified.
I1010 14:15:01.979] +++ [1010 14:15:01] Testing kubectl(v1:replicationcontrollers)
I1010 14:15:02.078] core.sh:1046: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:15:02.256] replicationcontroller/frontend created
W1010 14:15:02.356] E1010 14:15:02.030961   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:15:02.357] E1010 14:15:02.127334   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:15:02.357] E1010 14:15:02.235885   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:15:02.358] I1010 14:15:02.261712   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570716901-18737", Name:"frontend", UID:"7da3643a-3255-4388-ae0d-1274f22e69d6", APIVersion:"v1", ResourceVersion:"1565", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-27jtt
W1010 14:15:02.359] I1010 14:15:02.265601   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570716901-18737", Name:"frontend", UID:"7da3643a-3255-4388-ae0d-1274f22e69d6", APIVersion:"v1", ResourceVersion:"1565", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jf2dl
W1010 14:15:02.359] I1010 14:15:02.265907   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570716901-18737", Name:"frontend", UID:"7da3643a-3255-4388-ae0d-1274f22e69d6", APIVersion:"v1", ResourceVersion:"1565", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ljqht
W1010 14:15:02.359] E1010 14:15:02.336879   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:15:02.460] replicationcontroller "frontend" deleted
I1010 14:15:02.480] core.sh:1051: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:15:02.588] core.sh:1055: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1010 14:15:02.762] replicationcontroller/frontend created
W1010 14:15:02.863] I1010 14:15:02.766399   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570716901-18737", Name:"frontend", UID:"3b3ed4fd-e4dd-4d89-a0a4-924ac077f829", APIVersion:"v1", ResourceVersion:"1581", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-79sk9
W1010 14:15:02.864] I1010 14:15:02.769490   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570716901-18737", Name:"frontend", UID:"3b3ed4fd-e4dd-4d89-a0a4-924ac077f829", APIVersion:"v1", ResourceVersion:"1581", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kpd6z
W1010 14:15:02.865] I1010 14:15:02.769737   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570716901-18737", Name:"frontend", UID:"3b3ed4fd-e4dd-4d89-a0a4-924ac077f829", APIVersion:"v1", ResourceVersion:"1581", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-d9pph
I1010 14:15:02.965] core.sh:1059: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I1010 14:15:02.987]
I1010 14:15:02.992] core.sh:1061: FAIL!
I1010 14:15:02.992] Describe rc frontend
I1010 14:15:02.992]   Expected Match: Name:
I1010 14:15:02.993]   Not found in:
I1010 14:15:02.993] Name:         frontend
I1010 14:15:02.993] Namespace:    namespace-1570716901-18737
I1010 14:15:02.993] Selector:     app=guestbook,tier=frontend
I1010 14:15:02.993] Labels:       app=guestbook
I1010 14:15:02.993]               tier=frontend
I1010 14:15:02.993] Annotations:  <none>
I1010 14:15:02.993] Replicas:     3 current / 3 desired
I1010 14:15:02.993] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1010 14:15:02.993] Pod Template:
I1010 14:15:02.993]   Labels:  app=guestbook
I1010 14:15:02.994]            tier=frontend
I1010 14:15:02.994]   Containers:
I1010 14:15:02.994]    php-redis:
I1010 14:15:02.994]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I1010 14:15:02.995]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-79sk9
I1010 14:15:02.995]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-kpd6z
I1010 14:15:02.995]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-d9pph
I1010 14:15:02.995]
I1010 14:15:02.995] 1061 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
I1010 14:15:02.995]
W1010 14:15:03.096] E1010 14:15:03.032446   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:15:03.129] E1010 14:15:03.129115   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:15:03.230] core.sh:1063: Successful describe
I1010 14:15:03.231] Name:         frontend
I1010 14:15:03.231] Namespace:    namespace-1570716901-18737
I1010 14:15:03.231] Selector:     app=guestbook,tier=frontend
I1010 14:15:03.231] Labels:       app=guestbook
I1010 14:15:03.231]               tier=frontend
I1010 14:15:03.231] Annotations:  <none>
I1010 14:15:03.231] Replicas:     3 current / 3 desired
I1010 14:15:03.232] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1010 14:15:03.232] Pod Template:
I1010 14:15:03.232]   Labels:  app=guestbook
I1010 14:15:03.232]            tier=frontend
I1010 14:15:03.232]   Containers:
I1010 14:15:03.232]    php-redis:
I1010 14:15:03.232]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I1010 14:15:03.234] Namespace:    namespace-1570716901-18737
I1010 14:15:03.234] Selector:     app=guestbook,tier=frontend
I1010 14:15:03.234] Labels:       app=guestbook
I1010 14:15:03.234]               tier=frontend
I1010 14:15:03.234] Annotations:  <none>
I1010 14:15:03.234] Replicas:     3 current / 3 desired
I1010 14:15:03.234] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1010 14:15:03.234] Pod Template:
I1010 14:15:03.234]   Labels:  app=guestbook
I1010 14:15:03.235]            tier=frontend
I1010 14:15:03.235]   Containers:
I1010 14:15:03.235]    php-redis:
I1010 14:15:03.235]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I1010 14:15:03.340] Namespace:    namespace-1570716901-18737
I1010 14:15:03.340] Selector:     app=guestbook,tier=frontend
I1010 14:15:03.340] Labels:       app=guestbook
I1010 14:15:03.340]               tier=frontend
I1010 14:15:03.341] Annotations:  <none>
I1010 14:15:03.341] Replicas:     3 current / 3 desired
I1010 14:15:03.341] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1010 14:15:03.341] Pod Template:
I1010 14:15:03.341]   Labels:  app=guestbook
I1010 14:15:03.341]            tier=frontend
I1010 14:15:03.341]   Containers:
I1010 14:15:03.341]    php-redis:
I1010 14:15:03.341]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
I1010 14:15:03.342]   Type    Reason            Age   From                    Message
I1010 14:15:03.342]   ----    ------            ----  ----                    -------
I1010 14:15:03.343]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-79sk9
I1010 14:15:03.343]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-kpd6z
I1010 14:15:03.343]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-d9pph
I1010 14:15:03.343]
W1010 14:15:03.443] E1010 14:15:03.237501   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:15:03.444] E1010 14:15:03.338421   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:15:03.544] 
I1010 14:15:03.545] FAIL!
I1010 14:15:03.545] Describe rc
I1010 14:15:03.545]   Expected Match: Name:
I1010 14:15:03.545]   Not found in:
I1010 14:15:03.546] Name:         frontend
I1010 14:15:03.546] Namespace:    namespace-1570716901-18737
I1010 14:15:03.546] Selector:     app=guestbook,tier=frontend
I1010 14:15:03.546] Labels:       app=guestbook
I1010 14:15:03.546]               tier=frontend
I1010 14:15:03.546] Annotations:  <none>
I1010 14:15:03.547] Replicas:     3 current / 3 desired
I1010 14:15:03.547] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1010 14:15:03.547] Pod Template:
I1010 14:15:03.547]   Labels:  app=guestbook
I1010 14:15:03.547]            tier=frontend
I1010 14:15:03.548]   Containers:
I1010 14:15:03.548]    php-redis:
I1010 14:15:03.548]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 20 lines ...
I1010 14:15:03.562] Namespace:    namespace-1570716901-18737
I1010 14:15:03.562] Selector:     app=guestbook,tier=frontend
I1010 14:15:03.562] Labels:       app=guestbook
I1010 14:15:03.562]               tier=frontend
I1010 14:15:03.562] Annotations:  <none>
I1010 14:15:03.563] Replicas:     3 current / 3 desired
I1010 14:15:03.563] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1010 14:15:03.563] Pod Template:
I1010 14:15:03.563]   Labels:  app=guestbook
I1010 14:15:03.563]            tier=frontend
I1010 14:15:03.563]   Containers:
I1010 14:15:03.563]    php-redis:
I1010 14:15:03.563]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1010 14:15:03.663] Namespace:    namespace-1570716901-18737
I1010 14:15:03.663] Selector:     app=guestbook,tier=frontend
I1010 14:15:03.663] Labels:       app=guestbook
I1010 14:15:03.663]               tier=frontend
I1010 14:15:03.663] Annotations:  <none>
I1010 14:15:03.664] Replicas:     3 current / 3 desired
I1010 14:15:03.664] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1010 14:15:03.664] Pod Template:
I1010 14:15:03.664]   Labels:  app=guestbook
I1010 14:15:03.664]            tier=frontend
I1010 14:15:03.664]   Containers:
I1010 14:15:03.664]    php-redis:
I1010 14:15:03.665]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I1010 14:15:03.774] Namespace:    namespace-1570716901-18737
I1010 14:15:03.775] Selector:     app=guestbook,tier=frontend
I1010 14:15:03.775] Labels:       app=guestbook
I1010 14:15:03.775]               tier=frontend
I1010 14:15:03.775] Annotations:  <none>
I1010 14:15:03.775] Replicas:     3 current / 3 desired
I1010 14:15:03.775] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1010 14:15:03.775] Pod Template:
I1010 14:15:03.775]   Labels:  app=guestbook
I1010 14:15:03.776]            tier=frontend
I1010 14:15:03.776]   Containers:
I1010 14:15:03.776]    php-redis:
I1010 14:15:03.776]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I1010 14:15:04.591] core.sh:1099: Successful get rc frontend {{.spec.replicas}}: 3
I1010 14:15:04.683] core.sh:1103: Successful get rc frontend {{.spec.replicas}}: 3
I1010 14:15:04.762] replicationcontroller/frontend scaled
I1010 14:15:04.863] core.sh:1107: Successful get rc frontend {{.spec.replicas}}: 2
I1010 14:15:04.947] replicationcontroller "frontend" deleted
W1010 14:15:05.048] I1010 14:15:03.956352   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570716901-18737", Name:"frontend", UID:"3b3ed4fd-e4dd-4d89-a0a4-924ac077f829", APIVersion:"v1", ResourceVersion:"1591", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-79sk9
W1010 14:15:05.049] E1010 14:15:04.033964   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:15:05.049] E1010 14:15:04.130550   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:15:05.050] error: Expected replicas to be 3, was 2
W1010 14:15:05.050] E1010 14:15:04.239037   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:15:05.050] E1010 14:15:04.339890   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:15:05.051] I1010 14:15:04.495203   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570716901-18737", Name:"frontend", UID:"3b3ed4fd-e4dd-4d89-a0a4-924ac077f829", APIVersion:"v1", ResourceVersion:"1597", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-k9zzk
W1010 14:15:05.051] I1010 14:15:04.767516   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570716901-18737", Name:"frontend", UID:"3b3ed4fd-e4dd-4d89-a0a4-924ac077f829", APIVersion:"v1", ResourceVersion:"1602", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-k9zzk
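The "error: Expected replicas to be 3, was 2" line above is kubectl scale's size precondition: when --current-replicas is given, the client verifies the live .spec.replicas before patching and aborts on mismatch, which is exactly what the scale test exercises here. A self-contained sketch of that precondition (the types and the -1 "unset" sentinel are my own simplification, not kubectl's actual code):

```go
package main

import "fmt"

// scalePrecondition mirrors (hypothetically) the guard kubectl scale applies
// when --current-replicas is set: refuse to resize unless the live value
// matches what the caller claimed.
func scalePrecondition(current, expected int32) error {
	if expected != -1 && current != expected {
		return fmt.Errorf("Expected replicas to be %d, was %d", expected, current)
	}
	return nil
}

func main() {
	// Reproduces the error in the log: the rc had already been scaled to 2,
	// but the command asserted --current-replicas=3.
	if err := scalePrecondition(2, 3); err != nil {
		fmt.Println("error:", err)
	}
}
```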
W1010 14:15:05.052] E1010 14:15:05.035443   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1010 14:15:05.121] I1010 14:15:05.121091   53040 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1570716901-18737", Name:"redis-master", UID:"7ebd27e4-e8c7-40c5-805a-9ac6665154e4", APIVersion:"v1", ResourceVersion:"1610", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-8ldl8
W1010 14:15:05.132] E1010 14:15:05.131790   53040 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 14:15