PR: draveness: feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 1 failed / 2862 succeeded
Started: 2019-09-16 09:51
Elapsed: 29m28s
Revision
Builder: gke-prow-ssd-pool-1a225945-9tvq
Refs: master:ebd8f9cc, 82703:32e67c2e
pod: 769ac0bb-d867-11e9-af7a-7ecbb7a97bb8
infra-commit: e1cbc3ccd
repo: k8s.io/kubernetes
repo-commit: 4640b4f81ec6bcaac176111279f6d50529ab2cf5
repos: {u'k8s.io/kubernetes': u'master:ebd8f9ccb5c7a7f54f636db3a8a7dc1397046be6,82703:32e67c2e90fd5f25227992a421949001aa6f8fae'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestNodePIDPressure — 33s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$
=== RUN   TestNodePIDPressure
W0916 10:16:27.581570  108971 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0916 10:16:27.581593  108971 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0916 10:16:27.581608  108971 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0916 10:16:27.581620  108971 master.go:259] Using reconciler: 
I0916 10:16:27.593391  108971 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.593831  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.593878  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.599425  108971 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0916 10:16:27.599496  108971 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.600161  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.602297  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.600319  108971 reflector.go:158] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0916 10:16:27.603436  108971 watch_cache.go:405] Replace watchCache (rev: 30378) 
I0916 10:16:27.607124  108971 store.go:1342] Monitoring events count at <storage-prefix>//events
I0916 10:16:27.607377  108971 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.607288  108971 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0916 10:16:27.610048  108971 watch_cache.go:405] Replace watchCache (rev: 30378) 
I0916 10:16:27.611436  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.611664  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.613538  108971 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0916 10:16:27.613617  108971 reflector.go:158] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0916 10:16:27.615676  108971 watch_cache.go:405] Replace watchCache (rev: 30378) 
I0916 10:16:27.618581  108971 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.619220  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.619264  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.620969  108971 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0916 10:16:27.621055  108971 reflector.go:158] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0916 10:16:27.621304  108971 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.621961  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.621994  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.622299  108971 watch_cache.go:405] Replace watchCache (rev: 30379) 
I0916 10:16:27.623792  108971 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0916 10:16:27.623988  108971 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.624251  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.624256  108971 reflector.go:158] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0916 10:16:27.624277  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.625703  108971 watch_cache.go:405] Replace watchCache (rev: 30379) 
I0916 10:16:27.626932  108971 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0916 10:16:27.626975  108971 reflector.go:158] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0916 10:16:27.627165  108971 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.628394  108971 watch_cache.go:405] Replace watchCache (rev: 30379) 
I0916 10:16:27.629187  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.629308  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.631431  108971 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0916 10:16:27.631646  108971 reflector.go:158] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0916 10:16:27.632487  108971 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.633048  108971 watch_cache.go:405] Replace watchCache (rev: 30379) 
I0916 10:16:27.633730  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.633998  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.636256  108971 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0916 10:16:27.636485  108971 reflector.go:158] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0916 10:16:27.637665  108971 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.638428  108971 watch_cache.go:405] Replace watchCache (rev: 30379) 
I0916 10:16:27.639334  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.639496  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.641079  108971 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0916 10:16:27.641270  108971 reflector.go:158] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0916 10:16:27.641292  108971 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.641701  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.641742  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.643330  108971 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0916 10:16:27.643543  108971 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.643804  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.643925  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.644025  108971 reflector.go:158] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0916 10:16:27.646052  108971 watch_cache.go:405] Replace watchCache (rev: 30379) 
I0916 10:16:27.646576  108971 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0916 10:16:27.646839  108971 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.647080  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.647164  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.647273  108971 reflector.go:158] Listing and watching *core.Node from storage/cacher.go:/minions
I0916 10:16:27.651824  108971 watch_cache.go:405] Replace watchCache (rev: 30379) 
I0916 10:16:27.652360  108971 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0916 10:16:27.652607  108971 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.652915  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.652950  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.653100  108971 reflector.go:158] Listing and watching *core.Pod from storage/cacher.go:/pods
I0916 10:16:27.654061  108971 watch_cache.go:405] Replace watchCache (rev: 30379) 
I0916 10:16:27.654888  108971 watch_cache.go:405] Replace watchCache (rev: 30379) 
I0916 10:16:27.656211  108971 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0916 10:16:27.656292  108971 reflector.go:158] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0916 10:16:27.656453  108971 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.656737  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.656769  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.657991  108971 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0916 10:16:27.658050  108971 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.658267  108971 reflector.go:158] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0916 10:16:27.658447  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.658475  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.659316  108971 watch_cache.go:405] Replace watchCache (rev: 30380) 
I0916 10:16:27.659998  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.660052  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.661029  108971 watch_cache.go:405] Replace watchCache (rev: 30380) 
I0916 10:16:27.661262  108971 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.661495  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.661535  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.662777  108971 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0916 10:16:27.662812  108971 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0916 10:16:27.662841  108971 reflector.go:158] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0916 10:16:27.663301  108971 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.663526  108971 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.664348  108971 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.664382  108971 watch_cache.go:405] Replace watchCache (rev: 30380) 
I0916 10:16:27.665178  108971 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.666323  108971 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.667186  108971 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.667621  108971 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.667771  108971 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.667967  108971 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.668644  108971 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.669261  108971 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.669499  108971 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.670438  108971 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.670766  108971 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.671329  108971 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.671560  108971 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.672322  108971 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.672565  108971 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.672803  108971 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.673011  108971 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.673331  108971 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.673497  108971 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.673681  108971 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.674471  108971 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.674775  108971 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.675564  108971 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.676439  108971 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.676676  108971 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.676997  108971 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.677861  108971 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.678233  108971 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.678974  108971 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.679617  108971 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.680154  108971 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.680836  108971 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.681152  108971 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.681255  108971 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0916 10:16:27.681277  108971 master.go:461] Enabling API group "authentication.k8s.io".
I0916 10:16:27.681290  108971 master.go:461] Enabling API group "authorization.k8s.io".
I0916 10:16:27.681443  108971 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.681766  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.681802  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.683047  108971 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0916 10:16:27.683161  108971 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0916 10:16:27.683265  108971 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.683523  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.683551  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.684617  108971 watch_cache.go:405] Replace watchCache (rev: 30380) 
I0916 10:16:27.685104  108971 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0916 10:16:27.685336  108971 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.685604  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.685638  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.685800  108971 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0916 10:16:27.687522  108971 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0916 10:16:27.687552  108971 master.go:461] Enabling API group "autoscaling".
I0916 10:16:27.687627  108971 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0916 10:16:27.687765  108971 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.688039  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.688075  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.693335  108971 watch_cache.go:405] Replace watchCache (rev: 30381) 
I0916 10:16:27.693428  108971 watch_cache.go:405] Replace watchCache (rev: 30381) 
I0916 10:16:27.693688  108971 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0916 10:16:27.693769  108971 reflector.go:158] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0916 10:16:27.694063  108971 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.694595  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.694629  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.696820  108971 watch_cache.go:405] Replace watchCache (rev: 30382) 
I0916 10:16:27.698390  108971 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0916 10:16:27.698435  108971 master.go:461] Enabling API group "batch".
I0916 10:16:27.698617  108971 reflector.go:158] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0916 10:16:27.698667  108971 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.699076  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.699120  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.700427  108971 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0916 10:16:27.700485  108971 master.go:461] Enabling API group "certificates.k8s.io".
I0916 10:16:27.700516  108971 reflector.go:158] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0916 10:16:27.700627  108971 watch_cache.go:405] Replace watchCache (rev: 30383) 
I0916 10:16:27.700827  108971 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.701238  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.701289  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.701769  108971 watch_cache.go:405] Replace watchCache (rev: 30383) 
I0916 10:16:27.702610  108971 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0916 10:16:27.702838  108971 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0916 10:16:27.703086  108971 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.703554  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.703591  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.704245  108971 watch_cache.go:405] Replace watchCache (rev: 30383) 
I0916 10:16:27.705469  108971 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0916 10:16:27.705501  108971 master.go:461] Enabling API group "coordination.k8s.io".
I0916 10:16:27.705501  108971 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0916 10:16:27.705523  108971 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0916 10:16:27.705784  108971 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.706071  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.706098  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.706875  108971 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0916 10:16:27.706917  108971 master.go:461] Enabling API group "extensions".
I0916 10:16:27.706989  108971 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0916 10:16:27.707151  108971 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.707421  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.707448  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.708144  108971 watch_cache.go:405] Replace watchCache (rev: 30383) 
I0916 10:16:27.708449  108971 watch_cache.go:405] Replace watchCache (rev: 30383) 
I0916 10:16:27.708986  108971 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0916 10:16:27.709215  108971 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.709504  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.709536  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.709654  108971 reflector.go:158] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0916 10:16:27.711315  108971 watch_cache.go:405] Replace watchCache (rev: 30383) 
I0916 10:16:27.711363  108971 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0916 10:16:27.711409  108971 master.go:461] Enabling API group "networking.k8s.io".
I0916 10:16:27.711668  108971 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0916 10:16:27.711464  108971 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.712190  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.712510  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.713650  108971 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0916 10:16:27.713829  108971 master.go:461] Enabling API group "node.k8s.io".
I0916 10:16:27.713900  108971 reflector.go:158] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0916 10:16:27.714069  108971 watch_cache.go:405] Replace watchCache (rev: 30383) 
I0916 10:16:27.714299  108971 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.714728  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.715590  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.715151  108971 watch_cache.go:405] Replace watchCache (rev: 30383) 
I0916 10:16:27.717258  108971 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0916 10:16:27.717598  108971 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.717797  108971 reflector.go:158] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0916 10:16:27.717940  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.718017  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.718888  108971 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0916 10:16:27.718917  108971 master.go:461] Enabling API group "policy".
I0916 10:16:27.718964  108971 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.719213  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.719238  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.719325  108971 reflector.go:158] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0916 10:16:27.719383  108971 watch_cache.go:405] Replace watchCache (rev: 30383) 
I0916 10:16:27.720055  108971 watch_cache.go:405] Replace watchCache (rev: 30383) 
I0916 10:16:27.720389  108971 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0916 10:16:27.720417  108971 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0916 10:16:27.720611  108971 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.720903  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.720930  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.721274  108971 watch_cache.go:405] Replace watchCache (rev: 30383) 
I0916 10:16:27.722256  108971 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0916 10:16:27.722288  108971 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0916 10:16:27.722302  108971 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.722549  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.722641  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.723181  108971 watch_cache.go:405] Replace watchCache (rev: 30383) 
I0916 10:16:27.723746  108971 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0916 10:16:27.723864  108971 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0916 10:16:27.724151  108971 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.724462  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.724588  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.725170  108971 watch_cache.go:405] Replace watchCache (rev: 30383) 
I0916 10:16:27.726181  108971 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0916 10:16:27.726255  108971 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.726275  108971 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0916 10:16:27.727288  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.727470  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.728256  108971 watch_cache.go:405] Replace watchCache (rev: 30383) 
I0916 10:16:27.729421  108971 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0916 10:16:27.729569  108971 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0916 10:16:27.731490  108971 watch_cache.go:405] Replace watchCache (rev: 30383) 
I0916 10:16:27.731624  108971 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.733272  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.733440  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.738697  108971 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0916 10:16:27.738820  108971 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.739082  108971 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0916 10:16:27.739141  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.739174  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.740886  108971 watch_cache.go:405] Replace watchCache (rev: 30384) 
I0916 10:16:27.741582  108971 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0916 10:16:27.741964  108971 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.742199  108971 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0916 10:16:27.742224  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.742250  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.743577  108971 watch_cache.go:405] Replace watchCache (rev: 30385) 
I0916 10:16:27.743747  108971 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0916 10:16:27.743793  108971 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0916 10:16:27.743803  108971 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0916 10:16:27.744593  108971 watch_cache.go:405] Replace watchCache (rev: 30385) 
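The repeated storage_factory.go:285 lines above all print the same storage backend configuration: a per-test etcd key prefix, a single unsecured local etcd endpoint, paging enabled, a 5-minute compaction interval and a 1-minute count-metric poll period. As an illustrative sketch only (assuming the conventional k8s.io/apiserver package path for this type, which the log does not show), the logged value corresponds roughly to the following Go literal:

package example

import (
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

// loggedConfig mirrors the Config printed by every storage_factory.go:285 line above;
// fields left at their zero value (Type, TLS files, Codec, Transformer, ...) are the
// ones the log shows as empty or nil.
var loggedConfig = storagebackend.Config{
	Prefix: "a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", // per-test etcd key prefix
	Transport: storagebackend.TransportConfig{
		ServerList: []string{"http://127.0.0.1:2379"}, // local, unsecured etcd
	},
	Paging:                true,
	CompactionInterval:    5 * time.Minute, // 300000000000 ns in the log
	CountMetricPollPeriod: time.Minute,     // 60000000000 ns in the log
}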
I0916 10:16:27.746849  108971 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.747201  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.747235  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.748212  108971 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0916 10:16:27.748271  108971 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0916 10:16:27.748433  108971 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.748694  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.748740  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.749670  108971 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0916 10:16:27.749697  108971 master.go:461] Enabling API group "scheduling.k8s.io".
I0916 10:16:27.749777  108971 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0916 10:16:27.749858  108971 master.go:450] Skipping disabled API group "settings.k8s.io".
I0916 10:16:27.751608  108971 watch_cache.go:405] Replace watchCache (rev: 30385) 
I0916 10:16:27.751796  108971 watch_cache.go:405] Replace watchCache (rev: 30385) 
I0916 10:16:27.753507  108971 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.753832  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.753870  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.755301  108971 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0916 10:16:27.755367  108971 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0916 10:16:27.756878  108971 watch_cache.go:405] Replace watchCache (rev: 30385) 
I0916 10:16:27.758034  108971 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.759435  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.759685  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.760945  108971 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0916 10:16:27.761006  108971 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.761291  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.761322  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.761426  108971 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0916 10:16:27.762350  108971 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0916 10:16:27.762415  108971 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.762539  108971 reflector.go:158] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0916 10:16:27.762704  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.762768  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.763860  108971 watch_cache.go:405] Replace watchCache (rev: 30385) 
I0916 10:16:27.764166  108971 watch_cache.go:405] Replace watchCache (rev: 30385) 
I0916 10:16:27.765347  108971 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0916 10:16:27.765385  108971 reflector.go:158] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0916 10:16:27.765583  108971 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.765998  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.766039  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.766848  108971 watch_cache.go:405] Replace watchCache (rev: 30385) 
I0916 10:16:27.767296  108971 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0916 10:16:27.767382  108971 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0916 10:16:27.767741  108971 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.768011  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.768040  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.768482  108971 watch_cache.go:405] Replace watchCache (rev: 30385) 
I0916 10:16:27.769485  108971 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0916 10:16:27.769535  108971 master.go:461] Enabling API group "storage.k8s.io".
I0916 10:16:27.769826  108971 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.769952  108971 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0916 10:16:27.770048  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.770081  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.772633  108971 watch_cache.go:405] Replace watchCache (rev: 30385) 
I0916 10:16:27.773543  108971 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0916 10:16:27.773798  108971 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.773873  108971 reflector.go:158] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0916 10:16:27.774087  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.774113  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.774872  108971 watch_cache.go:405] Replace watchCache (rev: 30385) 
I0916 10:16:27.775322  108971 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0916 10:16:27.775573  108971 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.775795  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.775822  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.776027  108971 reflector.go:158] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0916 10:16:27.779701  108971 watch_cache.go:405] Replace watchCache (rev: 30386) 
I0916 10:16:27.780253  108971 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0916 10:16:27.780497  108971 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.780756  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.780790  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.780897  108971 reflector.go:158] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0916 10:16:27.783134  108971 watch_cache.go:405] Replace watchCache (rev: 30386) 
I0916 10:16:27.784841  108971 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0916 10:16:27.785084  108971 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.785311  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.785339  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.785445  108971 reflector.go:158] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0916 10:16:27.786787  108971 watch_cache.go:405] Replace watchCache (rev: 30386) 
I0916 10:16:27.787101  108971 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0916 10:16:27.787137  108971 master.go:461] Enabling API group "apps".
I0916 10:16:27.787190  108971 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.787240  108971 reflector.go:158] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0916 10:16:27.787376  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.787400  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.788485  108971 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0916 10:16:27.788527  108971 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.788658  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.788676  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.788774  108971 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0916 10:16:27.789133  108971 watch_cache.go:405] Replace watchCache (rev: 30386) 
I0916 10:16:27.790282  108971 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0916 10:16:27.790366  108971 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.790552  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.790575  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.790679  108971 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0916 10:16:27.790829  108971 watch_cache.go:405] Replace watchCache (rev: 30386) 
I0916 10:16:27.793348  108971 watch_cache.go:405] Replace watchCache (rev: 30386) 
I0916 10:16:27.793889  108971 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0916 10:16:27.793942  108971 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.794030  108971 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0916 10:16:27.794120  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.794147  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.798464  108971 watch_cache.go:405] Replace watchCache (rev: 30386) 
I0916 10:16:27.798970  108971 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0916 10:16:27.798994  108971 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0916 10:16:27.799042  108971 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.799439  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:27.799468  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:27.799583  108971 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0916 10:16:27.802078  108971 watch_cache.go:405] Replace watchCache (rev: 30387) 
I0916 10:16:27.802204  108971 store.go:1342] Monitoring events count at <storage-prefix>//events
I0916 10:16:27.802233  108971 master.go:461] Enabling API group "events.k8s.io".
I0916 10:16:27.802428  108971 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
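The "Listing and watching ... from storage/cacher.go" lines throughout this startup come from the apiserver's internal cacher, which lists a resource once and then watches it before replacing the watch cache at the logged revision. The same list-then-watch pattern is what client-go exposes as a Reflector; the sketch below is illustrative only and not part of this job, and the clientset and stopCh names are assumed to be provided by the caller:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// watchEvents lists Events once and then watches them, keeping a local store
// up to date — the client-side analogue of the cacher behaviour logged above.
func watchEvents(clientset kubernetes.Interface, stopCh <-chan struct{}) {
	lw := cache.NewListWatchFromClient(
		clientset.CoreV1().RESTClient(), "events", metav1.NamespaceAll, fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &corev1.Event{}, store, 0) // 0 = no periodic resync
	go r.Run(stopCh)                                       // list once, then watch from the returned resourceVersion
}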
I0916 10:16:27.802501  108971 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.802771  108971 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.803119  108971 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.803244  108971 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.803356  108971 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.803465  108971 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.803575  108971 watch_cache.go:405] Replace watchCache (rev: 30388) 
I0916 10:16:27.803697  108971 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.803811  108971 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.803899  108971 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.804054  108971 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.805203  108971 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.805568  108971 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.818015  108971 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.818738  108971 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.823549  108971 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.824278  108971 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.825726  108971 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.826314  108971 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.828372  108971 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.828894  108971 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 10:16:27.829633  108971 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0916 10:16:27.832045  108971 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.832414  108971 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.834334  108971 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.837356  108971 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.838578  108971 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.839615  108971 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.842575  108971 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.844788  108971 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.847666  108971 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.848598  108971 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.849513  108971 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 10:16:27.849783  108971 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0916 10:16:27.850995  108971 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.851491  108971 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.852302  108971 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.853351  108971 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.854142  108971 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.855200  108971 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.856230  108971 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.857145  108971 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.857856  108971 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.858650  108971 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.859943  108971 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 10:16:27.860155  108971 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0916 10:16:27.860894  108971 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.861527  108971 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 10:16:27.861600  108971 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0916 10:16:27.862402  108971 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.863191  108971 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.863497  108971 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.864338  108971 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.865043  108971 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.865734  108971 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.866632  108971 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 10:16:27.866758  108971 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0916 10:16:27.867730  108971 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.868537  108971 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.869036  108971 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.871404  108971 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.871937  108971 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.872335  108971 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.873115  108971 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.873426  108971 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.873684  108971 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.874573  108971 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.874844  108971 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.875076  108971 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 10:16:27.875145  108971 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0916 10:16:27.875158  108971 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0916 10:16:27.876127  108971 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.876863  108971 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.877746  108971 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.878489  108971 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.879588  108971 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a8c7ff56-1fda-4fec-aef1-20ca56bd8e2a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:16:27.883978  108971 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:16:27.884023  108971 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0916 10:16:27.884037  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:27.884051  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:27.884062  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:27.884075  108971 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:27.884151  108971 httplog.go:90] GET /healthz: (332.339µs) 0 [Go-http-client/1.1 127.0.0.1:60332]
I0916 10:16:27.885481  108971 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.722536ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:27.890013  108971 httplog.go:90] GET /api/v1/services: (2.714932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:27.896478  108971 httplog.go:90] GET /api/v1/services: (1.643476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:27.899763  108971 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:16:27.899793  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:27.899807  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:27.899817  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:27.899826  108971 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:27.899856  108971 httplog.go:90] GET /healthz: (243.31µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60332]
I0916 10:16:27.901479  108971 httplog.go:90] GET /api/v1/services: (1.142366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60332]
I0916 10:16:27.903043  108971 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.057027ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:27.904622  108971 httplog.go:90] GET /api/v1/services: (1.640079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60332]
I0916 10:16:27.905871  108971 httplog.go:90] POST /api/v1/namespaces: (2.40338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:27.907917  108971 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.377694ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:27.910648  108971 httplog.go:90] POST /api/v1/namespaces: (2.114203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:27.912727  108971 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.533206ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:27.916343  108971 httplog.go:90] POST /api/v1/namespaces: (2.822314ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:27.985667  108971 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:16:27.985954  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:27.985999  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:27.986091  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:27.986127  108971 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:27.986346  108971 httplog.go:90] GET /healthz: (862.034µs) 0 [Go-http-client/1.1 127.0.0.1:60334]
I0916 10:16:28.000628  108971 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:16:28.000698  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.000766  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:28.000777  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:28.000786  108971 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:28.000824  108971 httplog.go:90] GET /healthz: (383.303µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:28.085182  108971 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:16:28.085282  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.085299  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:28.085311  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:28.085320  108971 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:28.085359  108971 httplog.go:90] GET /healthz: (346.507µs) 0 [Go-http-client/1.1 127.0.0.1:60334]
I0916 10:16:28.100615  108971 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:16:28.100649  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.100662  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:28.100683  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:28.100691  108971 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:28.100748  108971 httplog.go:90] GET /healthz: (291.855µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:28.185300  108971 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:16:28.185334  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.185347  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:28.185359  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:28.185368  108971 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:28.185405  108971 httplog.go:90] GET /healthz: (306.232µs) 0 [Go-http-client/1.1 127.0.0.1:60334]
I0916 10:16:28.200767  108971 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:16:28.200818  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.200834  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:28.200845  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:28.200855  108971 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:28.200896  108971 httplog.go:90] GET /healthz: (350µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:28.285423  108971 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:16:28.285465  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.285480  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:28.285489  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:28.285498  108971 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:28.285549  108971 httplog.go:90] GET /healthz: (407.184µs) 0 [Go-http-client/1.1 127.0.0.1:60334]
I0916 10:16:28.300780  108971 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:16:28.300819  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.300836  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:28.300848  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:28.300856  108971 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:28.300903  108971 httplog.go:90] GET /healthz: (372.758µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:28.385056  108971 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:16:28.385102  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.385115  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:28.385124  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:28.385132  108971 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:28.385166  108971 httplog.go:90] GET /healthz: (293.7µs) 0 [Go-http-client/1.1 127.0.0.1:60334]
I0916 10:16:28.400762  108971 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:16:28.400805  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.400819  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:28.400830  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:28.400839  108971 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:28.400878  108971 httplog.go:90] GET /healthz: (336.813µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:28.485120  108971 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:16:28.485169  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.485184  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:28.485193  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:28.485202  108971 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:28.485250  108971 httplog.go:90] GET /healthz: (332.232µs) 0 [Go-http-client/1.1 127.0.0.1:60334]
I0916 10:16:28.500676  108971 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:16:28.500733  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.500747  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:28.500757  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:28.500765  108971 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:28.500809  108971 httplog.go:90] GET /healthz: (286.927µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:28.583617  108971 client.go:361] parsed scheme: "endpoint"
I0916 10:16:28.583737  108971 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:16:28.586890  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.586922  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:28.586934  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:28.586941  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:28.586986  108971 httplog.go:90] GET /healthz: (2.00507ms) 0 [Go-http-client/1.1 127.0.0.1:60334]
I0916 10:16:28.601536  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.601570  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:28.601579  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:28.601586  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:28.601620  108971 httplog.go:90] GET /healthz: (1.144341ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:28.686472  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.686509  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:28.686520  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:28.686529  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:28.686583  108971 httplog.go:90] GET /healthz: (1.619224ms) 0 [Go-http-client/1.1 127.0.0.1:60334]
I0916 10:16:28.701786  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.701822  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:28.701835  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:28.701844  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:28.701895  108971 httplog.go:90] GET /healthz: (1.358509ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:28.786216  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.786260  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:28.786273  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:28.786283  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:28.786338  108971 httplog.go:90] GET /healthz: (1.370679ms) 0 [Go-http-client/1.1 127.0.0.1:60334]
I0916 10:16:28.801607  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.801640  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:28.801652  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:28.801662  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:28.801732  108971 httplog.go:90] GET /healthz: (1.199315ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:28.886103  108971 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.817895ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:28.888256  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.168986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60332]
I0916 10:16:28.888689  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.888734  108971 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:16:28.888746  108971 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:16:28.888755  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:16:28.888796  108971 httplog.go:90] GET /healthz: (3.495101ms) 0 [Go-http-client/1.1 127.0.0.1:60474]
I0916 10:16:28.889161  108971 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.172725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60472]
I0916 10:16:28.889553  108971 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.918795ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:28.889791  108971 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0916 10:16:28.890955  108971 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.002084ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:28.891036  108971 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.417583ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60472]
I0916 10:16:28.892687  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.698378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60332]
I0916 10:16:28.893354  108971 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.917347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:28.893686  108971 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0916 10:16:28.893703  108971 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0916 10:16:28.894751  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.126484ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60332]
I0916 10:16:28.894984  108971 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (3.413271ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.898165  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (2.573835ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.900147  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.580574ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.901912  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.901946  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:28.901980  108971 httplog.go:90] GET /healthz: (736.267µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:28.902017  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.484039ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.903596  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.275491ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.905035  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (927.891µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.906330  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (978.26µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.907746  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (868.191µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.910082  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.021755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.910237  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0916 10:16:28.911422  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.067089ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.913421  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.706022ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.913913  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0916 10:16:28.915474  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.352731ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.918236  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.181296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.918514  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0916 10:16:28.921381  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (2.5423ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.924138  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.220606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.924519  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0916 10:16:28.925936  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.169229ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.929400  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.96506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.929641  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0916 10:16:28.931796  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.830545ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.934888  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.375013ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.935348  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0916 10:16:28.937638  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (2.026294ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.940545  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.216753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.940911  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0916 10:16:28.942515  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.347803ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.951062  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.184756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.954596  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0916 10:16:28.962200  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (7.128489ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.966602  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.597885ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.967450  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0916 10:16:28.969861  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.866596ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.972937  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.349301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.973262  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0916 10:16:28.974690  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.167235ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.980078  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.965261ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.980372  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0916 10:16:28.981928  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.296453ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.984951  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.285787ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.985328  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0916 10:16:28.987312  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.363611ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:28.987480  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:28.987520  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:28.987568  108971 httplog.go:90] GET /healthz: (2.41068ms) 0 [Go-http-client/1.1 127.0.0.1:60334]
I0916 10:16:28.991371  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.083173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:28.991643  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0916 10:16:28.994584  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (2.632334ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:28.998056  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.767905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:28.998343  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0916 10:16:29.000413  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.617438ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.001615  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.001641  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.002095  108971 httplog.go:90] GET /healthz: (1.637072ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:29.003119  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.234803ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.003369  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0916 10:16:29.005333  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.69469ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.008667  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.62047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.010053  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0916 10:16:29.011737  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.359424ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.015217  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.870996ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.015800  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0916 10:16:29.022595  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (6.612645ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.025878  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.607396ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.026505  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0916 10:16:29.028296  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.370373ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.030923  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.029244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.031291  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0916 10:16:29.032762  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.239661ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.035536  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.155091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.036298  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0916 10:16:29.039346  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.521561ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.043148  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.85005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.043746  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0916 10:16:29.045477  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.41961ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.048620  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.427867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.049187  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0916 10:16:29.051082  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.489067ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.053865  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.006129ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.054100  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0916 10:16:29.056428  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.975009ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.063938  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.675503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.064373  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0916 10:16:29.066362  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.57636ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.070424  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.033347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.070823  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0916 10:16:29.072725  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.563687ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.076949  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.700302ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.077358  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0916 10:16:29.079307  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.59125ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.082121  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.10649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.082626  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0916 10:16:29.085689  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (2.820301ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.089122  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.089157  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.089208  108971 httplog.go:90] GET /healthz: (3.342307ms) 0 [Go-http-client/1.1 127.0.0.1:60474]
I0916 10:16:29.092366  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.289607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.092655  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0916 10:16:29.097023  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (4.095481ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.100612  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.783173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.100889  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0916 10:16:29.102157  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.055241ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.102805  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.103082  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.103357  108971 httplog.go:90] GET /healthz: (2.973238ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:29.106423  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.603633ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.106832  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0916 10:16:29.108751  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.626731ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.111865  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.574573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.112338  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0916 10:16:29.114042  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.414599ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.116995  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.46981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.117293  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0916 10:16:29.119352  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.315297ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.122875  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.774802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.123453  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0916 10:16:29.124983  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.220206ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.128114  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.485071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.128610  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0916 10:16:29.130457  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.44094ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.133103  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.98559ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.133380  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0916 10:16:29.134889  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.246438ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.137413  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.905193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.137793  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0916 10:16:29.139543  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.476237ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.142585  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.362329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.142969  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0916 10:16:29.144637  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.38913ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.147618  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.353438ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.148115  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0916 10:16:29.149633  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.155705ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.186856  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.186890  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.186938  108971 httplog.go:90] GET /healthz: (2.134325ms) 0 [Go-http-client/1.1 127.0.0.1:60474]
I0916 10:16:29.197405  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (17.266502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.197742  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0916 10:16:29.200289  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (2.325664ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.203678  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.203704  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.203760  108971 httplog.go:90] GET /healthz: (1.064877ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:29.204898  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.069228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.205132  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0916 10:16:29.206806  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.391319ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.217962  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (7.844085ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.220587  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0916 10:16:29.223831  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (2.903865ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.229921  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.011749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.230205  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0916 10:16:29.233170  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (2.58294ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.238601  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.395418ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.280028  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0916 10:16:29.281493  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.118536ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.287912  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.287951  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.287998  108971 httplog.go:90] GET /healthz: (3.107102ms) 0 [Go-http-client/1.1 127.0.0.1:60474]
I0916 10:16:29.291131  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.822918ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.291477  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0916 10:16:29.293739  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.965054ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.298584  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.389574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.298971  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0916 10:16:29.301433  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.301468  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.301510  108971 httplog.go:90] GET /healthz: (1.077747ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:29.301628  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (2.382794ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.304345  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.122929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.304634  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0916 10:16:29.307819  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (2.839072ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.310584  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.201222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.310997  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0916 10:16:29.315383  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (4.126057ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.319506  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.047658ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.319816  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0916 10:16:29.321179  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.062615ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.324159  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.5296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.324554  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0916 10:16:29.327073  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (2.214993ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.338638  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.555199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.339109  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0916 10:16:29.340612  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.248008ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.343949  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.731549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.344367  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0916 10:16:29.346565  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.6588ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.350028  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.635076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.350312  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0916 10:16:29.351623  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.102055ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.354548  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.4179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.354799  108971 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0916 10:16:29.357020  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.887956ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.362384  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.838626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.362692  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0916 10:16:29.366768  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (3.723296ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.370779  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.307796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.371130  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0916 10:16:29.373241  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.886472ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.376100  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.317039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.376391  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0916 10:16:29.377905  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.2017ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.381290  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.934096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.381742  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0916 10:16:29.383194  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.190102ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.386058  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.932426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.386105  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.386130  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.386258  108971 httplog.go:90] GET /healthz: (1.598023ms) 0 [Go-http-client/1.1 127.0.0.1:60474]
I0916 10:16:29.386331  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0916 10:16:29.401696  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.401797  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.401845  108971 httplog.go:90] GET /healthz: (1.278092ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:29.405589  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.252732ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:29.427820  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.157546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:29.428106  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0916 10:16:29.451134  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (3.00582ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:29.472370  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.513405ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:29.472701  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0916 10:16:29.485641  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.299276ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:29.488124  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.488173  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.488224  108971 httplog.go:90] GET /healthz: (3.138565ms) 0 [Go-http-client/1.1 127.0.0.1:60334]
I0916 10:16:29.502030  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.502074  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.502140  108971 httplog.go:90] GET /healthz: (1.479428ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.507402  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.037267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.507729  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0916 10:16:29.525773  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.433856ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.547849  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.525845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.548164  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0916 10:16:29.566103  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.704891ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.586809  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.586850  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.586890  108971 httplog.go:90] GET /healthz: (1.432696ms) 0 [Go-http-client/1.1 127.0.0.1:60474]
I0916 10:16:29.587845  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.482271ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.588139  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0916 10:16:29.601850  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.601887  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.601925  108971 httplog.go:90] GET /healthz: (1.412008ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.606197  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.930907ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.627740  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.823181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.628071  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0916 10:16:29.673842  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (29.352745ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.679250  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.715085ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.679626  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0916 10:16:29.687457  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (2.976112ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.687973  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.687994  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.688042  108971 httplog.go:90] GET /healthz: (2.952721ms) 0 [Go-http-client/1.1 127.0.0.1:60474]
I0916 10:16:29.707185  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.707225  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.707284  108971 httplog.go:90] GET /healthz: (6.529685ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:29.709169  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.588253ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.709508  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0916 10:16:29.730847  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (6.579522ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.748674  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.424953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.749065  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0916 10:16:29.765873  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.588585ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.786582  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.786629  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.786682  108971 httplog.go:90] GET /healthz: (1.722979ms) 0 [Go-http-client/1.1 127.0.0.1:60474]
I0916 10:16:29.787647  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.206272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.788318  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0916 10:16:29.802837  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.802875  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.802940  108971 httplog.go:90] GET /healthz: (2.293159ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:29.805914  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.722324ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:29.827985  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.569947ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:29.828310  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0916 10:16:29.845346  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.154484ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:29.867083  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.764655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:29.867429  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0916 10:16:29.885691  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.885765  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.885811  108971 httplog.go:90] GET /healthz: (898.08µs) 0 [Go-http-client/1.1 127.0.0.1:60334]
I0916 10:16:29.885697  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.539699ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:29.902356  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.902459  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.902554  108971 httplog.go:90] GET /healthz: (1.628703ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.906453  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.271566ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.906848  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0916 10:16:29.925782  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.402533ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.946971  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.6872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.947229  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0916 10:16:29.981233  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (16.825212ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.985801  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:29.985835  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:29.985874  108971 httplog.go:90] GET /healthz: (1.02721ms) 0 [Go-http-client/1.1 127.0.0.1:60334]
I0916 10:16:29.991437  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.664413ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:29.991756  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0916 10:16:30.001658  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.001700  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.002041  108971 httplog.go:90] GET /healthz: (1.555944ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.006297  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.69122ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.028132  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.821068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.028844  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0916 10:16:30.045868  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.553982ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.066482  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.185397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.067075  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0916 10:16:30.085979  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.086012  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.086047  108971 httplog.go:90] GET /healthz: (951.683µs) 0 [Go-http-client/1.1 127.0.0.1:60474]
I0916 10:16:30.086123  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.767968ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.101662  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.101697  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.101774  108971 httplog.go:90] GET /healthz: (1.229415ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.107443  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.178995ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.107780  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0916 10:16:30.125938  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.65111ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.146970  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.710381ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.147301  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0916 10:16:30.165979  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.753214ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.186278  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.186329  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.186384  108971 httplog.go:90] GET /healthz: (1.465933ms) 0 [Go-http-client/1.1 127.0.0.1:60474]
I0916 10:16:30.188366  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.939121ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.188652  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0916 10:16:30.201725  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.201773  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.201829  108971 httplog.go:90] GET /healthz: (1.29352ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.205970  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.71576ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.227111  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.762182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.227433  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0916 10:16:30.245900  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.593506ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.268031  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.623194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.268439  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0916 10:16:30.285917  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.621311ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.286377  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.286414  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.286449  108971 httplog.go:90] GET /healthz: (1.391727ms) 0 [Go-http-client/1.1 127.0.0.1:60474]
I0916 10:16:30.302082  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.302128  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.302195  108971 httplog.go:90] GET /healthz: (1.674795ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.307241  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.871069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.307675  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0916 10:16:30.325934  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.664467ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.346571  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.293356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.347031  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0916 10:16:30.366264  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (2.008517ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.387625  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.338259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.388414  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0916 10:16:30.390320  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.390358  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.390425  108971 httplog.go:90] GET /healthz: (2.587658ms) 0 [Go-http-client/1.1 127.0.0.1:60334]
I0916 10:16:30.403327  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.403374  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.403431  108971 httplog.go:90] GET /healthz: (1.997792ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.405466  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.331632ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.427191  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.905735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.427700  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0916 10:16:30.447046  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (2.833828ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.466925  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.707986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.467196  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0916 10:16:30.487031  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (2.781082ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.487275  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.487300  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.487339  108971 httplog.go:90] GET /healthz: (2.332188ms) 0 [Go-http-client/1.1 127.0.0.1:60474]
I0916 10:16:30.501792  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.501834  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.501882  108971 httplog.go:90] GET /healthz: (1.389208ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.507096  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.850208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.507416  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0916 10:16:30.529918  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.703817ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.547068  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.919445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.548067  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0916 10:16:30.566147  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.885131ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.586548  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.586590  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.586639  108971 httplog.go:90] GET /healthz: (1.552143ms) 0 [Go-http-client/1.1 127.0.0.1:60474]
I0916 10:16:30.587418  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.079251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.587624  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0916 10:16:30.603164  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.603201  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.603260  108971 httplog.go:90] GET /healthz: (1.561695ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.605632  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.435992ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.628347  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.028132ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.628897  108971 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0916 10:16:30.646045  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.773732ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.648531  108971 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.926428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.667272  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.917585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.667676  108971 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0916 10:16:30.686484  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.686528  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.686573  108971 httplog.go:90] GET /healthz: (1.516453ms) 0 [Go-http-client/1.1 127.0.0.1:60474]
I0916 10:16:30.686582  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (2.263159ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.689098  108971 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.809693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.701695  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.701749  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.701795  108971 httplog.go:90] GET /healthz: (1.244963ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.706916  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.697628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.707503  108971 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0916 10:16:30.726110  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.804057ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.728902  108971 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.83658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.747442  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.0061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.747822  108971 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0916 10:16:30.765855  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.578416ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.768410  108971 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.061951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:30.791059  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.791096  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.791144  108971 httplog.go:90] GET /healthz: (1.85016ms) 0 [Go-http-client/1.1 127.0.0.1:60334]
I0916 10:16:30.792975  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.509276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.793400  108971 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0916 10:16:30.801856  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.801898  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.801946  108971 httplog.go:90] GET /healthz: (1.37688ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.805637  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.496203ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.807678  108971 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.520199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.830469  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.175705ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.830820  108971 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0916 10:16:30.846154  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.870128ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.858373  108971 httplog.go:90] GET /api/v1/namespaces/kube-system: (11.61187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.866321  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.065783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.866571  108971 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0916 10:16:30.887696  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.887751  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.887803  108971 httplog.go:90] GET /healthz: (3.044147ms) 0 [Go-http-client/1.1 127.0.0.1:60334]
I0916 10:16:30.887805  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (3.55779ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.891011  108971 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.762979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.901373  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.901405  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.901450  108971 httplog.go:90] GET /healthz: (975.54µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.906838  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.59296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.907196  108971 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0916 10:16:30.926518  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (2.128076ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.929144  108971 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.877676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.947020  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.770706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.947380  108971 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0916 10:16:30.966473  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (2.150539ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.971703  108971 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.692329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.993794  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:30.993827  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:30.993867  108971 httplog.go:90] GET /healthz: (2.570003ms) 0 [Go-http-client/1.1 127.0.0.1:60334]
I0916 10:16:30.994005  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.658234ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:30.994252  108971 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0916 10:16:31.010750  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (4.375832ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.010950  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:31.010968  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:31.010998  108971 httplog.go:90] GET /healthz: (4.094524ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:31.015975  108971 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.395596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.027804  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.481626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.028157  108971 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0916 10:16:31.046530  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.871725ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.051343  108971 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.962383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.075264  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (7.274821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.078576  108971 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0916 10:16:31.086364  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:31.086397  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:31.086444  108971 httplog.go:90] GET /healthz: (1.747198ms) 0 [Go-http-client/1.1 127.0.0.1:60474]
I0916 10:16:31.086604  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (2.420344ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.090768  108971 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.674572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.103211  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:31.103270  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:31.103345  108971 httplog.go:90] GET /healthz: (2.848612ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.107845  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.212243ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.108268  108971 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0916 10:16:31.125798  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.634527ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.128429  108971 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.058203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.158782  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (14.580694ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.159089  108971 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0916 10:16:31.168090  108971 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (3.831058ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.170538  108971 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.812404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.197626  108971 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:16:31.197675  108971 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:16:31.197755  108971 httplog.go:90] GET /healthz: (12.917542ms) 0 [Go-http-client/1.1 127.0.0.1:60474]
I0916 10:16:31.197897  108971 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (13.634643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.198213  108971 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0916 10:16:31.203340  108971 httplog.go:90] GET /healthz: (2.490324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.208839  108971 httplog.go:90] GET /api/v1/namespaces/default: (5.017861ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.214695  108971 httplog.go:90] POST /api/v1/namespaces: (5.271443ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.217452  108971 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.323546ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.229134  108971 httplog.go:90] POST /api/v1/namespaces/default/services: (11.084502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.231973  108971 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.035294ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.243094  108971 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (10.546531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.286442  108971 httplog.go:90] GET /healthz: (1.135232ms) 200 [Go-http-client/1.1 127.0.0.1:60334]
W0916 10:16:31.287350  108971 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:16:31.287414  108971 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:16:31.287428  108971 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:16:31.287461  108971 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:16:31.287482  108971 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:16:31.287501  108971 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:16:31.287512  108971 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:16:31.287526  108971 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:16:31.287547  108971 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:16:31.287559  108971 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:16:31.287628  108971 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0916 10:16:31.287658  108971 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0916 10:16:31.287669  108971 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
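The predicate list above includes CheckNodePIDPressure, the predicate exercised by TestNodePIDPressure. As a conceptual aid only (simplified stand-in types, not the scheduler's real API), a fit predicate boils down to a yes/no feasibility check per node:

```go
// A minimal conceptual sketch of what a fit predicate such as
// CheckNodePIDPressure expresses: "can this pod run on this node?",
// here answered only from whether the node reports PID pressure.
// The types below are simplified stand-ins, not the scheduler's real ones.
package main

import "fmt"

// nodeInfo is a simplified stand-in for the scheduler's cached node state.
type nodeInfo struct {
	Name        string
	PIDPressure bool
}

// fitPredicate is the general shape: pod name in, feasibility out.
type fitPredicate func(podName string, node nodeInfo) bool

// checkNodePIDPressure marks nodes that report PID pressure as infeasible.
func checkNodePIDPressure(podName string, node nodeInfo) bool {
	return !node.PIDPressure
}

func main() {
	var pred fitPredicate = checkNodePIDPressure
	n := nodeInfo{Name: "testnode", PIDPressure: false}
	fmt.Println(pred("pidpressure-fake-name", n)) // true: node is feasible
}
```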
I0916 10:16:31.287987  108971 shared_informer.go:197] Waiting for caches to sync for scheduler
I0916 10:16:31.288268  108971 reflector.go:120] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0916 10:16:31.288291  108971 reflector.go:158] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0916 10:16:31.289447  108971 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (786.743µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:16:31.290516  108971 get.go:251] Starting watch for /api/v1/pods, rv=30379 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=6m48s
E0916 10:16:31.384954  108971 factory.go:590] Error getting pod permit-plugin53c55daa-9e06-4a6e-b4bf-1361e321173f/test-pod for retry: Get http://127.0.0.1:37571/api/v1/namespaces/permit-plugin53c55daa-9e06-4a6e-b4bf-1361e321173f/pods/test-pod: dial tcp 127.0.0.1:37571: connect: connection refused; retrying...
I0916 10:16:31.388156  108971 shared_informer.go:227] caches populated
I0916 10:16:31.388192  108971 shared_informer.go:204] Caches are synced for scheduler 
I0916 10:16:31.388563  108971 reflector.go:120] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.388586  108971 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.388701  108971 reflector.go:120] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.388730  108971 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.388843  108971 reflector.go:120] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.388863  108971 reflector.go:158] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.388995  108971 reflector.go:120] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.389010  108971 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.389295  108971 reflector.go:120] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.389312  108971 reflector.go:158] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.389355  108971 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.389370  108971 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.389695  108971 reflector.go:120] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.389732  108971 reflector.go:158] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.390079  108971 reflector.go:120] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.390093  108971 reflector.go:158] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.390172  108971 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (834.424µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60702]
I0916 10:16:31.390270  108971 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (426.297µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:31.388565  108971 reflector.go:120] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.390291  108971 reflector.go:158] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.389700  108971 reflector.go:120] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.390887  108971 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0916 10:16:31.391271  108971 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (416.798µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60708]
I0916 10:16:31.391341  108971 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (514.053µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60706]
I0916 10:16:31.391705  108971 get.go:251] Starting watch for /api/v1/services, rv=30745 labels= fields= timeout=8m24s
I0916 10:16:31.391811  108971 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (369.903µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60702]
I0916 10:16:31.392087  108971 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30385 labels= fields= timeout=9m35s
I0916 10:16:31.392362  108971 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (383.698µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:16:31.392453  108971 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (626.383µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60716]
I0916 10:16:31.392451  108971 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=30386 labels= fields= timeout=5m18s
I0916 10:16:31.392904  108971 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30379 labels= fields= timeout=9m50s
I0916 10:16:31.392949  108971 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=30383 labels= fields= timeout=9m22s
I0916 10:16:31.393043  108971 get.go:251] Starting watch for /api/v1/nodes, rv=30379 labels= fields= timeout=6m29s
I0916 10:16:31.393211  108971 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=30385 labels= fields= timeout=6m34s
I0916 10:16:31.393435  108971 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (415.963µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60714]
I0916 10:16:31.393500  108971 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (356.806µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60712]
I0916 10:16:31.394077  108971 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=30386 labels= fields= timeout=6m26s
I0916 10:16:31.394087  108971 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=30380 labels= fields= timeout=6m37s
I0916 10:16:31.394383  108971 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (2.999137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60710]
I0916 10:16:31.396422  108971 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30379 labels= fields= timeout=9m39s
I0916 10:16:31.488519  108971 shared_informer.go:227] caches populated
I0916 10:16:31.488565  108971 shared_informer.go:227] caches populated
I0916 10:16:31.488573  108971 shared_informer.go:227] caches populated
I0916 10:16:31.488579  108971 shared_informer.go:227] caches populated
I0916 10:16:31.488585  108971 shared_informer.go:227] caches populated
I0916 10:16:31.488591  108971 shared_informer.go:227] caches populated
I0916 10:16:31.488598  108971 shared_informer.go:227] caches populated
I0916 10:16:31.488605  108971 shared_informer.go:227] caches populated
I0916 10:16:31.488611  108971 shared_informer.go:227] caches populated
I0916 10:16:31.488622  108971 shared_informer.go:227] caches populated
I0916 10:16:31.488633  108971 shared_informer.go:227] caches populated
I0916 10:16:31.493677  108971 httplog.go:90] POST /api/v1/nodes: (4.205139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:31.494512  108971 node_tree.go:93] Added node "testnode" in group "" to NodeTree
I0916 10:16:31.499302  108971 httplog.go:90] PUT /api/v1/nodes/testnode/status: (4.316845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
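The PUT to /api/v1/nodes/testnode/status above is the test updating the node's status; given the test name, it presumably sets the node's PIDPressure condition (the request body is not shown in the log). A hypothetical sketch of the kind of payload such an update carries, using the core/v1 NodeCondition field names:

```go
// Hypothetical sketch only: the actual request body sent by the test is not
// visible in the log. This just shows the shape of a node status update that
// sets the PIDPressure condition, using core/v1 NodeCondition field names.
package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the core/v1 NodeCondition fields used here.
type nodeCondition struct {
	Type   string `json:"type"`
	Status string `json:"status"`
}

func main() {
	// Assumed payload: report PIDPressure=True in the node's status conditions.
	body := map[string]interface{}{
		"status": map[string]interface{}{
			"conditions": []nodeCondition{
				{Type: "PIDPressure", Status: "True"},
			},
		},
	}
	b, _ := json.MarshalIndent(body, "", "  ")
	// The kind of body a client might PUT to /api/v1/nodes/testnode/status.
	fmt.Println(string(b))
}
```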
I0916 10:16:31.503915  108971 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods: (4.053168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:31.504508  108971 scheduling_queue.go:830] About to try and schedule pod node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pidpressure-fake-name
I0916 10:16:31.504524  108971 scheduler.go:530] Attempting to schedule pod: node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pidpressure-fake-name
I0916 10:16:31.504676  108971 scheduler_binder.go:257] AssumePodVolumes for pod "node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pidpressure-fake-name", node "testnode"
I0916 10:16:31.504694  108971 scheduler_binder.go:267] AssumePodVolumes for pod "node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pidpressure-fake-name", node "testnode": all PVCs bound and nothing to do
I0916 10:16:31.504773  108971 factory.go:606] Attempting to bind pidpressure-fake-name to testnode
I0916 10:16:31.509680  108971 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name/binding: (4.583101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:31.510018  108971 scheduler.go:662] pod node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pidpressure-fake-name is bound successfully on node "testnode", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0916 10:16:31.513597  108971 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/events: (2.9751ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
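The POST to .../pods/pidpressure-fake-name/binding two lines above is the scheduler's bind step: it asks the apiserver to place the pod on "testnode", and the 201 plus the "bound successfully" line confirm it. A minimal sketch of an equivalent raw bind request (stdlib only; the real scheduler goes through client-go, and the apiserver address here is an assumption, since the test picks its own port):

```go
// Minimal sketch, assuming a locally reachable, unauthenticated test apiserver.
// The real scheduler issues this binding via client-go rather than raw HTTP.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	ns := "node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088" // namespace from the log
	pod := "pidpressure-fake-name"
	node := "testnode"

	// A v1 Binding object: "schedule pod <pod> onto node <node>".
	body := fmt.Sprintf(`{"apiVersion":"v1","kind":"Binding","metadata":{"name":%q},"target":{"apiVersion":"v1","kind":"Node","name":%q}}`, pod, node)

	// The host and port are assumptions; the integration test apiserver listens on a random port.
	url := fmt.Sprintf("http://127.0.0.1:8080/api/v1/namespaces/%s/pods/%s/binding", ns, pod)
	resp, err := http.Post(url, "application/json", bytes.NewBufferString(body))
	if err != nil {
		fmt.Println("bind failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("bind status:", resp.Status) // 201 Created on success, as in the log
}
```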
I0916 10:16:31.606657  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.860042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
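The GET above is the first of a long run: the test polls the pod roughly every 100 ms, waiting for its scheduling status to match the expectation. A minimal, hypothetical sketch of that polling pattern (made-up helper names; the actual test uses its own utilities from test/integration/scheduler/util.go and client-go):

```go
// A hypothetical illustration of the poll-until-condition pattern behind the
// repeated GET lines in this log. Names here are made up for the sketch.
package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil re-evaluates cond every interval until it returns true, returns an
// error, or the timeout elapses.
func pollUntil(interval, timeout time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := cond()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	// Stand-in condition: in the test this would GET the pod and inspect its
	// PodScheduled condition instead of counting attempts.
	attempts := 0
	err := pollUntil(100*time.Millisecond, time.Second, func() (bool, error) {
		attempts++
		return attempts >= 5, nil
	})
	fmt.Println("attempts:", attempts, "err:", err)
}
```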
I0916 10:16:31.707286  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.44281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:31.806622  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.932628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:31.906816  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.050308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:32.007037  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.288806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:32.107113  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.412377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:32.206616  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.938684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:32.306620  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.923479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:32.391429  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:32.391743  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:32.392765  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:32.392955  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:32.393165  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:32.394980  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:32.407171  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.329448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:32.506977  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.242011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:32.606656  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.96223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:32.709554  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.271533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:32.806733  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.987863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:32.906643  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.946547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:33.006594  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.886008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:33.123884  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.989997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:33.206888  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.064112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:33.307236  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.42585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:33.391936  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:33.392086  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:33.392917  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:33.393137  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:33.393380  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:33.395478  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:33.410685  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (3.591448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:33.506975  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.134849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:33.607309  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.438529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:33.706980  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.199152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:33.808951  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (3.842379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:33.907874  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.745924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:34.013762  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (5.319557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:34.107168  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.25309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:34.206813  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.11055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:34.307206  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.421457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:34.392143  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:34.392263  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:34.393065  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:34.393323  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:34.393531  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:34.395669  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:34.408290  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.74094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:34.509645  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (4.772541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:34.637653  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.293799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:34.708443  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (3.472624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:34.807072  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.340294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:34.909437  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.108723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:35.007290  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.422285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:35.108232  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (3.411367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:35.207080  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.286983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:35.310440  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.100584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:35.392420  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:35.393224  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:35.393269  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:35.393459  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:35.393777  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:35.395794  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:35.414219  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (8.938104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:35.507047  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.312961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:35.614102  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (9.327192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:35.708261  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (3.265141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:35.807472  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.636936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:35.906770  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.096386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:36.007184  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.467867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:36.106807  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.086827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:36.207180  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.44656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:36.306856  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.17152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:36.392755  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:36.393466  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:36.393498  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:36.393678  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:36.393912  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:36.395957  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:36.406950  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.179247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:36.506873  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.03079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:36.609155  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (4.007577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:36.707078  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.24097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:36.806914  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.135573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:36.906961  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.206186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:37.007213  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.96169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:37.106855  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.132305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:37.209378  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.593836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:37.306977  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.094936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:37.393382  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:37.393620  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:37.393665  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:37.393750  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:37.394262  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:37.396120  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:37.406655  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.96102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:37.506737  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.945052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:37.606860  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.136822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:37.707076  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.335513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:37.807219  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.518742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:37.907040  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.234169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
E0916 10:16:37.968001  108971 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:37571/apis/events.k8s.io/v1beta1/namespaces/permit-plugin53c55daa-9e06-4a6e-b4bf-1361e321173f/events: dial tcp 127.0.0.1:37571: connect: connection refused' (may retry after sleeping)
I0916 10:16:38.007297  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.48704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:38.106703  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.977749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:38.207204  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.379583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:38.306984  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.248993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:38.393605  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:38.393779  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:38.393799  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:38.394196  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:38.394367  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:38.396267  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:38.410049  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.075502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:38.506946  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.216161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:38.607288  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.434379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:38.707258  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.341789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:38.806889  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.187411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:38.907406  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.480226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:39.007539  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.763402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:39.107010  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.973602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:39.207033  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.240358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:39.306972  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.206107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:39.393784  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:39.394240  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:39.394281  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:39.394413  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:39.395086  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:39.396429  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:39.407511  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.720291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:39.506909  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.226314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:39.607259  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.562184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:39.707311  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.459351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:39.807160  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.224513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:39.907327  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.490978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:40.007117  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.382606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:40.110118  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.659049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:40.207073  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.282315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:40.307523  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.886114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:40.393991  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:40.394481  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:40.394520  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:40.394676  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:40.395747  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:40.396629  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:40.407455  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.737439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:40.508809  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (4.059502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:40.607083  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.386561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:40.707388  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.479331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:40.807450  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.30128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:40.906879  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.052909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:41.006892  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.091419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:41.106849  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.085431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:41.207433  108971 httplog.go:90] GET /api/v1/namespaces/default: (3.1869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:41.209145  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.576186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:41.210542  108971 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.125847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:41.213285  108971 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.26559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:41.307309  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.416882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:41.394236  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:41.394736  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:41.394792  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:41.394866  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:41.396209  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:41.396783  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:41.407193  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.263977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:41.507427  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.669685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:41.607344  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.588136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:41.707023  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.939237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:41.806432  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.825783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:41.906854  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.15878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:42.006690  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.999429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:42.107004  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.150971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:42.206964  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.283535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:42.307150  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.371992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:42.394463  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:42.394930  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:42.394973  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:42.395082  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:42.396794  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:42.396951  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:42.407434  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.648192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:42.506692  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.019739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:42.607029  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.205613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:42.707543  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.736232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:42.807174  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.451091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:42.907124  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.314869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:43.007005  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.364854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:43.107217  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.332848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:43.208682  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.161292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:43.307095  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.322388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:43.394685  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:43.395076  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:43.395156  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:43.395335  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:43.397105  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:43.397176  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:43.407168  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.33296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:43.506768  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.014633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:43.607290  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.5023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:43.706912  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.195464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:43.813024  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (8.22896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:43.908252  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.819362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:44.006917  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.109199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:44.107245  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.541061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:44.207042  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.33865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:44.306909  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.103599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:44.395335  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:44.395391  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:44.395496  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:44.396021  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:44.397298  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:44.397313  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:44.406746  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.975025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:44.507032  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.232436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:44.607254  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.433791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:44.707691  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.64645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:44.807565  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.787689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:44.907418  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.654005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:45.006949  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.241215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:45.106770  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.043429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:45.206706  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.023996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:45.306936  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.145839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:45.395548  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:45.395654  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:45.395761  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:45.396339  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:45.397447  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:45.397493  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:45.416443  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.534489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:45.517667  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.479089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:45.606844  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.053039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:45.707161  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.371599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:45.806983  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.258611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:45.907005  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.129944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:46.015798  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.116443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:46.114303  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (6.013854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:46.209518  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.170963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:46.307391  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.628585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:46.395769  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:46.395911  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:46.395931  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:46.396807  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:46.397556  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:46.397578  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:46.406795  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.012817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:46.507404  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.583083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:46.607367  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.532603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:46.706767  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.910131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:46.807465  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.703225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:46.907264  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.328815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:47.007035  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.200422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:47.106949  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.294388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:47.207498  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.188052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:47.306878  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.167509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:47.396274  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:47.396337  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:47.396466  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:47.397026  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:47.397667  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:47.397695  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:47.406986  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.285156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:47.525829  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (21.025238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:47.607384  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.590981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:47.707033  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.325374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:47.807127  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.397407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:47.906789  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.081888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:48.006822  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.093284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:48.106667  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.905673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:48.207239  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.383334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:48.307020  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.220319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:48.396490  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:48.396563  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:48.396723  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:48.397179  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:48.399176  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:48.399227  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:48.409290  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.34032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:48.512363  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.535364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:48.607005  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.24982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:48.706825  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.073566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:48.807021  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.324361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:48.907150  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.315055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:49.007300  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.507509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:49.106773  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.08667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:49.206988  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.262064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:49.306978  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.14038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:49.396742  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:49.396804  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:49.396909  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:49.397381  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:49.399343  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:49.399418  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:49.406758  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.033616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:49.507017  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.233327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:49.606816  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.019092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:49.707277  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.442874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:49.806598  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.839533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:49.907131  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.203302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:50.006924  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.158442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
E0916 10:16:50.047742  108971 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:37571/apis/events.k8s.io/v1beta1/namespaces/permit-plugin53c55daa-9e06-4a6e-b4bf-1361e321173f/events: dial tcp 127.0.0.1:37571: connect: connection refused' (may retry after sleeping)
I0916 10:16:50.106815  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.006958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:50.206928  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.229709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:50.306827  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.107548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:50.396947  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:50.397010  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:50.397097  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:50.397549  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:50.399637  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:50.399678  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:50.407518  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.75645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:50.506792  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.989669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:50.606659  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.943986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:50.707398  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.623132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:50.807020  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.21211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:50.906571  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.798052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:51.006840  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.177658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:51.109744  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (5.017868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:51.207282  108971 httplog.go:90] GET /api/v1/namespaces/default: (3.040281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:51.209756  108971 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.996649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:16:51.209756  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (3.456403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:51.211912  108971 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.528255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:51.306686  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.689263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:51.397184  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:51.397239  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:51.397351  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:51.397749  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:51.399789  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:51.399841  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:51.413610  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (8.900222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:51.507069  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.241585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:51.609273  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (4.551411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:51.707381  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.541102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:51.807230  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.471782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:51.907157  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.401357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:52.007204  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.346104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:52.106913  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.292295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:52.209122  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (4.310585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:52.307948  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (3.143644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:52.397464  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:52.397547  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:52.397673  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:52.397941  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:52.399981  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:52.400035  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:52.407063  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.223794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:52.506738  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.079767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:52.607607  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.733671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:52.708004  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (3.106483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:52.806342  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.6895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:52.906878  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.931694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:53.010317  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.681525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:53.106411  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.659032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:53.206830  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.91097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:53.306485  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.780991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:53.397756  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:53.397827  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:53.397961  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:53.398108  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:53.400168  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:53.400208  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:53.407185  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.391954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:53.507327  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.520415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:53.607259  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.493489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:53.707509  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.747914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:53.807222  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.545011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:53.907677  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.957305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:54.006785  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.939545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:54.107778  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.898109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:54.208912  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (4.133619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:54.307197  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.392735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:54.397998  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:54.398218  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:54.398267  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:54.398277  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:54.400303  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:54.400303  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:54.407173  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.373765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:54.507195  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.34401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:54.607152  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.349532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:54.706885  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.166439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:54.806641  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.888994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:54.906632  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.885526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:55.006673  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.964234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:55.107124  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.391902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:55.206885  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.119875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:55.318669  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (9.331496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:55.398232  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:55.398369  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:55.398424  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:55.398425  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:55.400439  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:55.400482  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:55.407331  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.613986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:55.518238  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (13.393902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:55.608171  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.172915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:55.708084  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (3.302956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:55.807626  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.690694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:55.907465  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.681876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:56.007673  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.936493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:56.108533  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (3.527802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:56.207767  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.853852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:56.307202  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.380812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:56.398431  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:56.398545  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:56.398879  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:56.398918  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:56.400660  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:56.400721  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:56.406819  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.946707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:56.507596  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.763396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:56.616818  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.405086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:56.723272  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.011851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:56.807267  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.383695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:56.907168  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.384394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
E0916 10:16:56.985901  108971 factory.go:590] Error getting pod permit-plugin53c55daa-9e06-4a6e-b4bf-1361e321173f/test-pod for retry: Get http://127.0.0.1:37571/api/v1/namespaces/permit-plugin53c55daa-9e06-4a6e-b4bf-1361e321173f/pods/test-pod: dial tcp 127.0.0.1:37571: connect: connection refused; retrying...
I0916 10:16:57.007050  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.250118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:57.106994  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.069217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:57.206788  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.002852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:57.306764  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.08689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:57.398961  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:57.398980  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:57.399046  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:57.399084  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:57.400849  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:57.400894  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:57.417244  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (12.483482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:57.506664  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.876062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:57.607014  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.235037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:57.707540  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.624279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:57.807385  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.516539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:57.907282  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.488785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:58.006870  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.051851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:58.107830  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (3.030313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:58.207364  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.532592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:58.307048  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.254128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:58.399143  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:58.399159  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:58.399315  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:58.399338  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:58.401049  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:58.401086  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:58.407180  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.337629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:58.506734  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.838543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:58.606666  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.96259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:58.707254  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.412489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:58.806821  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.024343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:58.907684  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.893909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:59.007351  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.463142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:59.107281  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.445624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:59.206835  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.066254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:59.306884  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.21621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:59.399369  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:59.399463  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:59.399544  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:59.399607  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:59.401220  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:59.401284  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:16:59.407958  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (3.239474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:59.507451  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.620552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:59.606791  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.07944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:59.707184  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.386662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:59.807027  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.227736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:16:59.907209  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.359381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:00.006838  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.126782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:00.107223  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.269447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:00.207386  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.506185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
E0916 10:17:00.218639  108971 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:37571/apis/events.k8s.io/v1beta1/namespaces/permit-plugin53c55daa-9e06-4a6e-b4bf-1361e321173f/events: dial tcp 127.0.0.1:37571: connect: connection refused' (may retry after sleeping)
I0916 10:17:00.306817  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.04573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:00.399647  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:17:00.399904  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:17:00.400023  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:17:00.400216  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:17:00.402922  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:17:00.402974  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:17:00.406547  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.873266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:00.506702  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.979369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:00.606383  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.676234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:00.709307  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (4.328843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:00.806833  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.032521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:00.906889  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.078665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:01.007226  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.220547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:01.106770  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (1.891959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:01.208282  108971 httplog.go:90] GET /api/v1/namespaces/default: (3.180542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:01.208843  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (3.385141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60724]
I0916 10:17:01.211202  108971 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.758079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:01.213137  108971 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.477724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:01.306850  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.098955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:01.399855  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:17:01.400162  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:17:01.400293  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:17:01.401027  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:17:01.403085  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:17:01.403127  108971 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:17:01.407436  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.478477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:01.507699  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.77227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:01.510674  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.351008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:01.519967  108971 httplog.go:90] DELETE /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (8.683207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:01.526193  108971 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure823c0ba1-0e8b-4924-83b5-65f917a17088/pods/pidpressure-fake-name: (2.248271ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
E0916 10:17:01.527397  108971 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0916 10:17:01.528618  108971 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=30745&timeout=8m24s&timeoutSeconds=504&watch=true: (30.137303385s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60704]
I0916 10:17:01.528685  108971 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30386&timeout=5m18s&timeoutSeconds=318&watch=true: (30.136492889s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60702]
I0916 10:17:01.528618  108971 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30379&timeout=9m50s&timeoutSeconds=590&watch=true: (30.136071532s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60706]
I0916 10:17:01.528780  108971 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=30380&timeout=6m37s&timeoutSeconds=397&watch=true: (30.134928338s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60714]
I0916 10:17:01.528687  108971 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=30385&timeout=6m34s&timeoutSeconds=394&watch=true: (30.135802001s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60718]
I0916 10:17:01.528618  108971 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=30383&timeout=9m22s&timeoutSeconds=562&watch=true: (30.135895058s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60708]
I0916 10:17:01.528913  108971 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30385&timeout=9m35s&timeoutSeconds=575&watch=true: (30.137076032s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60720]
I0916 10:17:01.528917  108971 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30386&timeout=6m26s&timeoutSeconds=386&watch=true: (30.135084969s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60712]
I0916 10:17:01.528953  108971 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30379&timeout=9m39s&timeoutSeconds=579&watch=true: (30.132873646s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60710]
I0916 10:17:01.528990  108971 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30379&timeout=6m29s&timeoutSeconds=389&watch=true: (30.136205724s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60474]
I0916 10:17:01.529103  108971 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=30379&timeoutSeconds=408&watch=true: (30.239073087s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60334]
I0916 10:17:01.538349  108971 httplog.go:90] DELETE /api/v1/nodes: (9.206518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:01.539010  108971 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0916 10:17:01.542213  108971 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.71756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
I0916 10:17:01.545144  108971 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.355094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32894]
--- FAIL: TestNodePIDPressure (33.97s)
    predicates_test.go:924: Test Failed: error, timed out waiting for the condition, while waiting for scheduled

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190916-100815.xml

Find permit-plugin53c55daa-9e06-4a6e-b4bf-1361e321173f/test-pod mentions in log files | View test history on testgrid
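The failing assertion at predicates_test.go:924 is the end of a polling wait: the long run of GET .../pods/pidpressure-fake-name requests in the log above (one roughly every 100 ms for about 30 seconds) is the test repeatedly checking whether its fake pod has been bound to a node. Below is a minimal Go sketch of that pattern, assuming a helper built on wait.Poll from k8s.io/apimachinery; the names podScheduled and waitForPodScheduled are illustrative, not necessarily the exact helpers used in test/integration/scheduler.

package example

import (
    "time"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// podScheduled returns a condition that reports true once the pod has been
// bound to a node; each evaluation issues one GET, matching the request
// cadence visible in the log above. (Hypothetical helper name.)
func podScheduled(cs kubernetes.Interface, namespace, name string) wait.ConditionFunc {
    return func() (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
        if err != nil {
            // Keep polling on transient errors instead of failing the wait.
            return false, nil
        }
        return pod.Spec.NodeName != "", nil
    }
}

// waitForPodScheduled polls every 100ms with a 30s budget; exhausting the
// budget surfaces as "timed out waiting for the condition". (Hypothetical
// helper name; interval and timeout are assumptions inferred from the log.)
func waitForPodScheduled(cs kubernetes.Interface, pod *v1.Pod) error {
    return wait.Poll(100*time.Millisecond, 30*time.Second,
        podScheduled(cs, pod.Namespace, pod.Name))
}

Under this pattern, a pod that is never bound (for example, if the scheduler keeps treating the node's PID pressure condition as unschedulable) exhausts the polling budget and reports exactly the "timed out waiting for the condition, while waiting for scheduled" error shown in the failure above.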


2862 Passed Tests

4 Skipped Tests

Error lines from build-log.txt

... skipping 889 lines ...
W0916 10:03:07.072] I0916 10:03:06.973926   53026 node_lifecycle_controller.go:359] Controller is using taint based evictions.
W0916 10:03:07.072] I0916 10:03:06.974013   53026 taint_manager.go:162] Sending events to api server.
W0916 10:03:07.073] I0916 10:03:06.974076   53026 node_lifecycle_controller.go:453] Controller will reconcile labels.
W0916 10:03:07.073] I0916 10:03:06.974094   53026 node_lifecycle_controller.go:465] Controller will taint node by condition.
W0916 10:03:07.073] I0916 10:03:06.974118   53026 controllermanager.go:534] Started "nodelifecycle"
W0916 10:03:07.073] I0916 10:03:06.974390   53026 node_lifecycle_controller.go:77] Sending events to api server
W0916 10:03:07.074] E0916 10:03:06.974461   53026 core.go:200] failed to start cloud node lifecycle controller: no cloud provider provided
W0916 10:03:07.074] W0916 10:03:06.974471   53026 controllermanager.go:526] Skipping "cloud-node-lifecycle"
W0916 10:03:07.074] I0916 10:03:06.974794   53026 node_lifecycle_controller.go:488] Starting node controller
W0916 10:03:07.075] I0916 10:03:06.974833   53026 shared_informer.go:197] Waiting for caches to sync for taint
W0916 10:03:07.075] W0916 10:03:06.975029   53026 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W0916 10:03:07.075] I0916 10:03:06.975991   53026 controllermanager.go:534] Started "attachdetach"
W0916 10:03:07.076] W0916 10:03:06.976024   53026 controllermanager.go:526] Skipping "ttl-after-finished"
W0916 10:03:07.076] I0916 10:03:06.976663   53026 controllermanager.go:534] Started "horizontalpodautoscaling"
W0916 10:03:07.076] W0916 10:03:06.976684   53026 controllermanager.go:513] "bootstrapsigner" is disabled
W0916 10:03:07.076] W0916 10:03:06.976691   53026 controllermanager.go:526] Skipping "nodeipam"
W0916 10:03:07.077] I0916 10:03:06.976704   53026 attach_detach_controller.go:334] Starting attach detach controller
W0916 10:03:07.077] I0916 10:03:06.976777   53026 shared_informer.go:197] Waiting for caches to sync for attach detach
W0916 10:03:07.077] E0916 10:03:06.977009   53026 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0916 10:03:07.078] W0916 10:03:06.977023   53026 controllermanager.go:526] Skipping "service"
W0916 10:03:07.078] I0916 10:03:06.977269   53026 horizontal.go:156] Starting HPA controller
W0916 10:03:07.078] I0916 10:03:06.977387   53026 shared_informer.go:197] Waiting for caches to sync for HPA
W0916 10:03:07.079] I0916 10:03:06.977486   53026 controllermanager.go:534] Started "persistentvolume-binder"
W0916 10:03:07.079] I0916 10:03:06.977667   53026 pv_controller_base.go:282] Starting persistent volume controller
W0916 10:03:07.079] I0916 10:03:06.977818   53026 shared_informer.go:197] Waiting for caches to sync for persistent volume
... skipping 30 lines ...
W0916 10:03:07.088] I0916 10:03:06.995405   53026 controllermanager.go:534] Started "persistentvolume-expander"
W0916 10:03:07.089] I0916 10:03:06.995532   53026 expand_controller.go:300] Starting expand controller
W0916 10:03:07.089] I0916 10:03:06.995558   53026 shared_informer.go:197] Waiting for caches to sync for expand
W0916 10:03:07.089] I0916 10:03:06.997801   53026 controllermanager.go:534] Started "pv-protection"
W0916 10:03:07.090] I0916 10:03:06.999194   53026 pv_protection_controller.go:81] Starting PV protection controller
W0916 10:03:07.090] I0916 10:03:06.999293   53026 shared_informer.go:197] Waiting for caches to sync for PV protection
W0916 10:03:07.090] W0916 10:03:07.027970   53026 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0916 10:03:07.091] I0916 10:03:07.051457   53026 shared_informer.go:204] Caches are synced for GC 
W0916 10:03:07.091] I0916 10:03:07.052224   53026 shared_informer.go:204] Caches are synced for ReplicationController 
W0916 10:03:07.091] I0916 10:03:07.055676   53026 shared_informer.go:204] Caches are synced for disruption 
W0916 10:03:07.092] I0916 10:03:07.056093   53026 disruption.go:341] Sending events to api server.
W0916 10:03:07.092] I0916 10:03:07.055832   53026 shared_informer.go:204] Caches are synced for TTL 
W0916 10:03:07.092] I0916 10:03:07.073858   53026 shared_informer.go:204] Caches are synced for job 
... skipping 94 lines ...
I0916 10:03:10.752] +++ working dir: /go/src/k8s.io/kubernetes
I0916 10:03:10.755] +++ command: run_RESTMapper_evaluation_tests
I0916 10:03:10.765] +++ [0916 10:03:10] Creating namespace namespace-1568628190-6656
I0916 10:03:10.842] namespace/namespace-1568628190-6656 created
I0916 10:03:10.915] Context "test" modified.
I0916 10:03:10.922] +++ [0916 10:03:10] Testing RESTMapper
I0916 10:03:11.029] +++ [0916 10:03:11] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0916 10:03:11.043] +++ exit code: 0
I0916 10:03:11.162] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0916 10:03:11.162] bindings                                                                      true         Binding
I0916 10:03:11.163] componentstatuses                 cs                                          false        ComponentStatus
I0916 10:03:11.163] configmaps                        cm                                          true         ConfigMap
I0916 10:03:11.163] endpoints                         ep                                          true         Endpoints
... skipping 609 lines ...
I0916 10:03:31.323] (Bcore.sh:229: Successful get configmaps --namespace=test-kubectl-describe-pod {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
I0916 10:03:31.397] (Bconfigmap/test-configmap created
I0916 10:03:31.495] core.sh:235: Successful get configmap/test-configmap --namespace=test-kubectl-describe-pod {{.metadata.name}}: test-configmap
I0916 10:03:31.571] (Bpoddisruptionbudget.policy/test-pdb-1 created
I0916 10:03:31.665] core.sh:241: Successful get pdb/test-pdb-1 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 2
I0916 10:03:31.742] (Bpoddisruptionbudget.policy/test-pdb-2 created
W0916 10:03:31.843] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0916 10:03:31.844] error: setting 'all' parameter but found a non empty selector. 
W0916 10:03:31.844] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0916 10:03:31.845] I0916 10:03:31.567839   49471 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
I0916 10:03:31.945] core.sh:245: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
I0916 10:03:31.946] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0916 10:03:32.048] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0916 10:03:32.129] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0916 10:03:32.235] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0916 10:03:32.417] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:03:32.626] (Bpod/env-test-pod created
W0916 10:03:32.727] error: min-available and max-unavailable cannot be both specified
I0916 10:03:32.854] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0916 10:03:32.855] Name:         env-test-pod
I0916 10:03:32.855] Namespace:    test-kubectl-describe-pod
I0916 10:03:32.855] Priority:     0
I0916 10:03:32.855] Node:         <none>
I0916 10:03:32.855] Labels:       <none>
... skipping 174 lines ...
I0916 10:03:48.187] (Bpod/valid-pod patched
I0916 10:03:48.285] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0916 10:03:48.368] (Bpod/valid-pod patched
I0916 10:03:48.470] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0916 10:03:48.656] (Bpod/valid-pod patched
I0916 10:03:48.763] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0916 10:03:48.949] (B+++ [0916 10:03:48] "kubectl patch with resourceVersion 501" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0916 10:03:49.218] pod "valid-pod" deleted
I0916 10:03:49.230] pod/valid-pod replaced
I0916 10:03:49.334] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0916 10:03:49.512] (BSuccessful
I0916 10:03:49.513] message:error: --grace-period must have --force specified
I0916 10:03:49.513] has:\-\-grace-period must have \-\-force specified
I0916 10:03:49.674] Successful
I0916 10:03:49.674] message:error: --timeout must have --force specified
I0916 10:03:49.675] has:\-\-timeout must have \-\-force specified
W0916 10:03:49.835] W0916 10:03:49.834427   53026 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0916 10:03:49.936] node/node-v1-test created
I0916 10:03:50.011] node/node-v1-test replaced
I0916 10:03:50.116] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0916 10:03:50.200] (Bnode "node-v1-test" deleted
I0916 10:03:50.315] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0916 10:03:50.612] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 29 lines ...
I0916 10:03:52.513] (Bcore.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0916 10:03:52.604] (Bpod "valid-pod" force deleted
W0916 10:03:52.704] Edit cancelled, no changes made.
W0916 10:03:52.705] Edit cancelled, no changes made.
W0916 10:03:52.705] Edit cancelled, no changes made.
W0916 10:03:52.705] Edit cancelled, no changes made.
W0916 10:03:52.705] error: 'name' already has a value (valid-pod), and --overwrite is false
W0916 10:03:52.706] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0916 10:03:52.806] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:03:52.807] (B+++ [0916 10:03:52] Creating namespace namespace-1568628232-23312
I0916 10:03:52.809] namespace/namespace-1568628232-23312 created
I0916 10:03:52.893] Context "test" modified.
I0916 10:03:52.998] core.sh:610: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 79 lines ...
I0916 10:04:00.119] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0916 10:04:00.121] +++ working dir: /go/src/k8s.io/kubernetes
I0916 10:04:00.124] +++ command: run_kubectl_create_error_tests
I0916 10:04:00.135] +++ [0916 10:04:00] Creating namespace namespace-1568628240-1398
I0916 10:04:00.210] namespace/namespace-1568628240-1398 created
I0916 10:04:00.283] Context "test" modified.
I0916 10:04:00.289] +++ [0916 10:04:00] Testing kubectl create with error
W0916 10:04:00.390] Error: must specify one of -f and -k
W0916 10:04:00.391] 
W0916 10:04:00.391] Create a resource from a file or from stdin.
W0916 10:04:00.392] 
W0916 10:04:00.392]  JSON and YAML formats are accepted.
W0916 10:04:00.392] 
W0916 10:04:00.392] Examples:
... skipping 41 lines ...
W0916 10:04:00.401] 
W0916 10:04:00.401] Usage:
W0916 10:04:00.401]   kubectl create -f FILENAME [options]
W0916 10:04:00.401] 
W0916 10:04:00.401] Use "kubectl <command> --help" for more information about a given command.
W0916 10:04:00.401] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0916 10:04:00.529] +++ [0916 10:04:00] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0916 10:04:00.630] kubectl convert is DEPRECATED and will be removed in a future version.
W0916 10:04:00.631] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0916 10:04:00.733] +++ exit code: 0
I0916 10:04:00.766] Recording: run_kubectl_apply_tests
I0916 10:04:00.766] Running command: run_kubectl_apply_tests
I0916 10:04:00.790] 
... skipping 17 lines ...
I0916 10:04:02.463] (Bpod "test-pod" deleted
I0916 10:04:02.690] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W0916 10:04:03.048] I0916 10:04:03.048060   49471 client.go:361] parsed scheme: "endpoint"
W0916 10:04:03.049] I0916 10:04:03.048914   49471 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0916 10:04:03.055] I0916 10:04:03.054321   49471 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I0916 10:04:03.155] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W0916 10:04:03.256] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0916 10:04:03.357] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0916 10:04:03.357] +++ exit code: 0
I0916 10:04:03.357] Recording: run_kubectl_run_tests
I0916 10:04:03.357] Running command: run_kubectl_run_tests
I0916 10:04:03.377] 
I0916 10:04:03.380] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 94 lines ...
I0916 10:04:06.237] Context "test" modified.
I0916 10:04:06.243] +++ [0916 10:04:06] Testing kubectl create filter
I0916 10:04:06.337] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:04:06.551] (Bpod/selector-test-pod created
I0916 10:04:06.654] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0916 10:04:06.746] (BSuccessful
I0916 10:04:06.746] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0916 10:04:06.747] has:pods "selector-test-pod-dont-apply" not found
I0916 10:04:06.824] pod "selector-test-pod" deleted
I0916 10:04:06.843] +++ exit code: 0
I0916 10:04:06.876] Recording: run_kubectl_apply_deployments_tests
I0916 10:04:06.876] Running command: run_kubectl_apply_deployments_tests
I0916 10:04:06.899] 
... skipping 31 lines ...
W0916 10:04:09.334] I0916 10:04:09.236614   53026 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568628246-22260", Name:"nginx", UID:"56722c69-11da-47fc-8f4e-f26a4c09b87e", APIVersion:"apps/v1", ResourceVersion:"584", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
W0916 10:04:09.335] I0916 10:04:09.242384   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628246-22260", Name:"nginx-8484dd655", UID:"9953a482-3719-492b-b707-5b1564bf222f", APIVersion:"apps/v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-r2kfj
W0916 10:04:09.335] I0916 10:04:09.244959   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628246-22260", Name:"nginx-8484dd655", UID:"9953a482-3719-492b-b707-5b1564bf222f", APIVersion:"apps/v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-z68pd
W0916 10:04:09.336] I0916 10:04:09.247018   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628246-22260", Name:"nginx-8484dd655", UID:"9953a482-3719-492b-b707-5b1564bf222f", APIVersion:"apps/v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-76lmh
I0916 10:04:09.436] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0916 10:04:13.551] (BSuccessful
I0916 10:04:13.551] message:Error from server (Conflict): error when applying patch:
I0916 10:04:13.552] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568628246-22260\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0916 10:04:13.552] to:
I0916 10:04:13.552] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0916 10:04:13.552] Name: "nginx", Namespace: "namespace-1568628246-22260"
I0916 10:04:13.554] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568628246-22260\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-09-16T10:04:09Z" "generation":'\x01' "labels":map["name":"nginx"] "name":"nginx" "namespace":"namespace-1568628246-22260" "resourceVersion":"597" "selfLink":"/apis/apps/v1/namespaces/namespace-1568628246-22260/deployments/nginx" "uid":"56722c69-11da-47fc-8f4e-f26a4c09b87e"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-09-16T10:04:09Z" "lastUpdateTime":"2019-09-16T10:04:09Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-09-16T10:04:09Z" "lastUpdateTime":"2019-09-16T10:04:09Z" "message":"ReplicaSet \"nginx-8484dd655\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0916 10:04:13.555] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0916 10:04:13.555] has:Error from server (Conflict)
W0916 10:04:14.522] I0916 10:04:14.521396   53026 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1568628237-11587
I0916 10:04:18.833] deployment.apps/nginx configured
I0916 10:04:18.929] Successful
I0916 10:04:18.929] message:        "name": "nginx2"
I0916 10:04:18.929]           "name": "nginx2"
I0916 10:04:18.929] has:"name": "nginx2"
W0916 10:04:19.030] I0916 10:04:18.838177   53026 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568628246-22260", Name:"nginx", UID:"88d52be3-e5cb-471f-803d-57fea3aa02b8", APIVersion:"apps/v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
W0916 10:04:19.030] I0916 10:04:18.842665   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628246-22260", Name:"nginx-668b6c7744", UID:"27b653e9-8e1d-4684-95d5-da0f5a0a6064", APIVersion:"apps/v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-dwg7d
W0916 10:04:19.031] I0916 10:04:18.845907   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628246-22260", Name:"nginx-668b6c7744", UID:"27b653e9-8e1d-4684-95d5-da0f5a0a6064", APIVersion:"apps/v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-8s6tc
W0916 10:04:19.031] I0916 10:04:18.847001   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628246-22260", Name:"nginx-668b6c7744", UID:"27b653e9-8e1d-4684-95d5-da0f5a0a6064", APIVersion:"apps/v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-f79mj
W0916 10:04:23.182] E0916 10:04:23.180641   53026 replica_set.go:450] Sync "namespace-1568628246-22260/nginx-668b6c7744" failed with Operation cannot be fulfilled on replicasets.apps "nginx-668b6c7744": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1568628246-22260/nginx-668b6c7744, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 27b653e9-8e1d-4684-95d5-da0f5a0a6064, UID in object meta: 
W0916 10:04:24.155] I0916 10:04:24.154902   53026 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568628246-22260", Name:"nginx", UID:"01e2c35d-342f-4c14-a041-7c9040bc0e6e", APIVersion:"apps/v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
W0916 10:04:24.161] I0916 10:04:24.160370   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628246-22260", Name:"nginx-668b6c7744", UID:"9a651295-209f-4ccc-a136-1947e2517ff4", APIVersion:"apps/v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-7vfw6
W0916 10:04:24.169] I0916 10:04:24.169161   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628246-22260", Name:"nginx-668b6c7744", UID:"9a651295-209f-4ccc-a136-1947e2517ff4", APIVersion:"apps/v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-lq75t
W0916 10:04:24.172] I0916 10:04:24.172142   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628246-22260", Name:"nginx-668b6c7744", UID:"9a651295-209f-4ccc-a136-1947e2517ff4", APIVersion:"apps/v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-nnhtp
I0916 10:04:24.273] Successful
I0916 10:04:24.274] message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 132 lines ...
I0916 10:04:26.491] +++ [0916 10:04:26] Creating namespace namespace-1568628266-29930
I0916 10:04:26.565] namespace/namespace-1568628266-29930 created
I0916 10:04:26.639] Context "test" modified.
I0916 10:04:26.645] +++ [0916 10:04:26] Testing kubectl get
I0916 10:04:26.738] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:04:26.824] (BSuccessful
I0916 10:04:26.824] message:Error from server (NotFound): pods "abc" not found
I0916 10:04:26.825] has:pods "abc" not found
I0916 10:04:26.913] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:04:26.999] (BSuccessful
I0916 10:04:27.000] message:Error from server (NotFound): pods "abc" not found
I0916 10:04:27.000] has:pods "abc" not found
I0916 10:04:27.088] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:04:27.173] (BSuccessful
I0916 10:04:27.174] message:{
I0916 10:04:27.174]     "apiVersion": "v1",
I0916 10:04:27.174]     "items": [],
... skipping 23 lines ...
I0916 10:04:27.512] has not:No resources found
I0916 10:04:27.596] Successful
I0916 10:04:27.597] message:NAME
I0916 10:04:27.597] has not:No resources found
I0916 10:04:27.685] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:04:27.784] (BSuccessful
I0916 10:04:27.784] message:error: the server doesn't have a resource type "foobar"
I0916 10:04:27.784] has not:No resources found
I0916 10:04:27.872] Successful
I0916 10:04:27.873] message:No resources found in namespace-1568628266-29930 namespace.
I0916 10:04:27.873] has:No resources found
I0916 10:04:27.961] Successful
I0916 10:04:27.962] message:
I0916 10:04:27.962] has not:No resources found
I0916 10:04:28.049] Successful
I0916 10:04:28.050] message:No resources found in namespace-1568628266-29930 namespace.
I0916 10:04:28.050] has:No resources found
I0916 10:04:28.141] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:04:28.234] (BSuccessful
I0916 10:04:28.235] message:Error from server (NotFound): pods "abc" not found
I0916 10:04:28.235] has:pods "abc" not found
I0916 10:04:28.236] FAIL!
I0916 10:04:28.236] message:Error from server (NotFound): pods "abc" not found
I0916 10:04:28.237] has not:List
I0916 10:04:28.237] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0916 10:04:28.354] Successful
I0916 10:04:28.354] message:I0916 10:04:28.303548   62989 loader.go:375] Config loaded from file:  /tmp/tmp.7jO9JdA8X2/.kube/config
I0916 10:04:28.354] I0916 10:04:28.305430   62989 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0916 10:04:28.355] I0916 10:04:28.327505   62989 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
I0916 10:04:33.983] Successful
I0916 10:04:33.984] message:NAME    DATA   AGE
I0916 10:04:33.984] one     0      0s
I0916 10:04:33.984] three   0      0s
I0916 10:04:33.984] two     0      0s
I0916 10:04:33.984] STATUS    REASON          MESSAGE
I0916 10:04:33.984] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 10:04:33.985] has not:watch is only supported on individual resources
I0916 10:04:35.088] Successful
I0916 10:04:35.088] message:STATUS    REASON          MESSAGE
I0916 10:04:35.088] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 10:04:35.089] has not:watch is only supported on individual resources
I0916 10:04:35.093] +++ [0916 10:04:35] Creating namespace namespace-1568628275-17295
I0916 10:04:35.168] namespace/namespace-1568628275-17295 created
I0916 10:04:35.243] Context "test" modified.
I0916 10:04:35.335] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:04:35.500] (Bpod/valid-pod created
... skipping 56 lines ...
I0916 10:04:35.591] }
I0916 10:04:35.691] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0916 10:04:35.947] (B<no value>Successful
I0916 10:04:35.947] message:valid-pod:
I0916 10:04:35.947] has:valid-pod:
I0916 10:04:36.033] Successful
I0916 10:04:36.034] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0916 10:04:36.034] 	template was:
I0916 10:04:36.034] 		{.missing}
I0916 10:04:36.034] 	object given to jsonpath engine was:
I0916 10:04:36.035] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-09-16T10:04:35Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1568628275-17295", "resourceVersion":"696", "selfLink":"/api/v1/namespaces/namespace-1568628275-17295/pods/valid-pod", "uid":"3c7fa1c4-a920-42b6-a1d0-e0fb4e2ec165"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0916 10:04:36.035] has:missing is not found
I0916 10:04:36.116] Successful
I0916 10:04:36.117] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0916 10:04:36.117] 	template was:
I0916 10:04:36.117] 		{{.missing}}
I0916 10:04:36.117] 	raw data was:
I0916 10:04:36.118] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-09-16T10:04:35Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1568628275-17295","resourceVersion":"696","selfLink":"/api/v1/namespaces/namespace-1568628275-17295/pods/valid-pod","uid":"3c7fa1c4-a920-42b6-a1d0-e0fb4e2ec165"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0916 10:04:36.118] 	object given to template engine was:
I0916 10:04:36.119] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-09-16T10:04:35Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1568628275-17295 resourceVersion:696 selfLink:/api/v1/namespaces/namespace-1568628275-17295/pods/valid-pod uid:3c7fa1c4-a920-42b6-a1d0-e0fb4e2ec165] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0916 10:04:36.119] has:map has no entry for key "missing"
W0916 10:04:36.220] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0916 10:04:37.202] Successful
I0916 10:04:37.202] message:NAME        READY   STATUS    RESTARTS   AGE
I0916 10:04:37.202] valid-pod   0/1     Pending   0          1s
I0916 10:04:37.202] STATUS      REASON          MESSAGE
I0916 10:04:37.203] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 10:04:37.203] has:STATUS
I0916 10:04:37.205] Successful
I0916 10:04:37.205] message:NAME        READY   STATUS    RESTARTS   AGE
I0916 10:04:37.205] valid-pod   0/1     Pending   0          1s
I0916 10:04:37.205] STATUS      REASON          MESSAGE
I0916 10:04:37.206] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 10:04:37.206] has:valid-pod
I0916 10:04:38.294] Successful
I0916 10:04:38.295] message:pod/valid-pod
I0916 10:04:38.295] has not:STATUS
I0916 10:04:38.297] Successful
I0916 10:04:38.297] message:pod/valid-pod
... skipping 72 lines ...
I0916 10:04:39.391] status:
I0916 10:04:39.391]   phase: Pending
I0916 10:04:39.391]   qosClass: Guaranteed
I0916 10:04:39.392] ---
I0916 10:04:39.392] has:name: valid-pod
I0916 10:04:39.475] Successful
I0916 10:04:39.475] message:Error from server (NotFound): pods "invalid-pod" not found
I0916 10:04:39.476] has:"invalid-pod" not found
I0916 10:04:39.555] pod "valid-pod" deleted
I0916 10:04:39.648] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:04:39.814] (Bpod/redis-master created
I0916 10:04:39.819] pod/valid-pod created
I0916 10:04:39.930] Successful
... skipping 35 lines ...
I0916 10:04:41.074] +++ command: run_kubectl_exec_pod_tests
I0916 10:04:41.085] +++ [0916 10:04:41] Creating namespace namespace-1568628281-23522
I0916 10:04:41.161] namespace/namespace-1568628281-23522 created
I0916 10:04:41.233] Context "test" modified.
I0916 10:04:41.240] +++ [0916 10:04:41] Testing kubectl exec POD COMMAND
I0916 10:04:41.330] Successful
I0916 10:04:41.331] message:Error from server (NotFound): pods "abc" not found
I0916 10:04:41.331] has:pods "abc" not found
I0916 10:04:41.489] pod/test-pod created
I0916 10:04:41.590] Successful
I0916 10:04:41.591] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0916 10:04:41.591] has not:pods "test-pod" not found
I0916 10:04:41.592] Successful
I0916 10:04:41.592] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0916 10:04:41.593] has not:pod or type/name must be specified
I0916 10:04:41.670] pod "test-pod" deleted
I0916 10:04:41.689] +++ exit code: 0
I0916 10:04:41.724] Recording: run_kubectl_exec_resource_name_tests
I0916 10:04:41.725] Running command: run_kubectl_exec_resource_name_tests
I0916 10:04:41.746] 
... skipping 2 lines ...
I0916 10:04:41.755] +++ command: run_kubectl_exec_resource_name_tests
I0916 10:04:41.765] +++ [0916 10:04:41] Creating namespace namespace-1568628281-15660
I0916 10:04:41.841] namespace/namespace-1568628281-15660 created
I0916 10:04:41.915] Context "test" modified.
I0916 10:04:41.921] +++ [0916 10:04:41] Testing kubectl exec TYPE/NAME COMMAND
I0916 10:04:42.020] Successful
I0916 10:04:42.021] message:error: the server doesn't have a resource type "foo"
I0916 10:04:42.021] has:error:
I0916 10:04:42.108] Successful
I0916 10:04:42.109] message:Error from server (NotFound): deployments.apps "bar" not found
I0916 10:04:42.109] has:"bar" not found
I0916 10:04:42.263] pod/test-pod created
I0916 10:04:42.420] replicaset.apps/frontend created
W0916 10:04:42.521] I0916 10:04:42.424152   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628281-15660", Name:"frontend", UID:"13b2c54e-1866-4998-9442-23f346e2c346", APIVersion:"apps/v1", ResourceVersion:"749", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tzk87
W0916 10:04:42.522] I0916 10:04:42.428280   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628281-15660", Name:"frontend", UID:"13b2c54e-1866-4998-9442-23f346e2c346", APIVersion:"apps/v1", ResourceVersion:"749", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mjbcj
W0916 10:04:42.522] I0916 10:04:42.428536   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628281-15660", Name:"frontend", UID:"13b2c54e-1866-4998-9442-23f346e2c346", APIVersion:"apps/v1", ResourceVersion:"749", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7d9gh
I0916 10:04:42.623] configmap/test-set-env-config created
I0916 10:04:42.687] Successful
I0916 10:04:42.687] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0916 10:04:42.688] has:not implemented
I0916 10:04:42.790] Successful
I0916 10:04:42.790] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0916 10:04:42.790] has not:not found
I0916 10:04:42.792] Successful
I0916 10:04:42.792] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0916 10:04:42.792] has not:pod or type/name must be specified
I0916 10:04:42.907] Successful
I0916 10:04:42.908] message:Error from server (BadRequest): pod frontend-7d9gh does not have a host assigned
I0916 10:04:42.908] has not:not found
I0916 10:04:42.910] Successful
I0916 10:04:42.910] message:Error from server (BadRequest): pod frontend-7d9gh does not have a host assigned
I0916 10:04:42.910] has not:pod or type/name must be specified
I0916 10:04:43.013] pod "test-pod" deleted
I0916 10:04:43.103] replicaset.apps "frontend" deleted
I0916 10:04:43.200] configmap "test-set-env-config" deleted
I0916 10:04:43.219] +++ exit code: 0
I0916 10:04:43.253] Recording: run_create_secret_tests
I0916 10:04:43.253] Running command: run_create_secret_tests
I0916 10:04:43.281] 
I0916 10:04:43.283] +++ Running case: test-cmd.run_create_secret_tests 
I0916 10:04:43.286] +++ working dir: /go/src/k8s.io/kubernetes
I0916 10:04:43.288] +++ command: run_create_secret_tests
I0916 10:04:43.390] Successful
I0916 10:04:43.391] message:Error from server (NotFound): secrets "mysecret" not found
I0916 10:04:43.391] has:secrets "mysecret" not found
I0916 10:04:43.560] Successful
I0916 10:04:43.560] message:Error from server (NotFound): secrets "mysecret" not found
I0916 10:04:43.560] has:secrets "mysecret" not found
I0916 10:04:43.562] Successful
I0916 10:04:43.563] message:user-specified
I0916 10:04:43.563] has:user-specified
I0916 10:04:43.639] Successful
I0916 10:04:43.734] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"0287e5c9-5c8b-44df-abb3-54e07fe6c029","resourceVersion":"770","creationTimestamp":"2019-09-16T10:04:43Z"}}
... skipping 2 lines ...
I0916 10:04:43.922] has:uid
I0916 10:04:44.005] Successful
I0916 10:04:44.005] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"0287e5c9-5c8b-44df-abb3-54e07fe6c029","resourceVersion":"771","creationTimestamp":"2019-09-16T10:04:43Z"},"data":{"key1":"config1"}}
I0916 10:04:44.005] has:config1
I0916 10:04:44.081] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"0287e5c9-5c8b-44df-abb3-54e07fe6c029"}}
I0916 10:04:44.183] Successful
I0916 10:04:44.184] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0916 10:04:44.184] has:configmaps "tester-update-cm" not found
I0916 10:04:44.198] +++ exit code: 0
I0916 10:04:44.235] Recording: run_kubectl_create_kustomization_directory_tests
I0916 10:04:44.236] Running command: run_kubectl_create_kustomization_directory_tests
I0916 10:04:44.261] 
I0916 10:04:44.265] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
I0916 10:04:46.990] valid-pod   0/1     Pending   0          0s
I0916 10:04:46.990] has:valid-pod
I0916 10:04:48.074] Successful
I0916 10:04:48.075] message:NAME        READY   STATUS    RESTARTS   AGE
I0916 10:04:48.075] valid-pod   0/1     Pending   0          1s
I0916 10:04:48.075] STATUS      REASON          MESSAGE
I0916 10:04:48.076] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 10:04:48.076] has:Timeout exceeded while reading body
I0916 10:04:48.157] Successful
I0916 10:04:48.157] message:NAME        READY   STATUS    RESTARTS   AGE
I0916 10:04:48.157] valid-pod   0/1     Pending   0          2s
I0916 10:04:48.158] has:valid-pod
I0916 10:04:48.230] Successful
I0916 10:04:48.230] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0916 10:04:48.230] has:Invalid timeout value
I0916 10:04:48.318] pod "valid-pod" deleted
I0916 10:04:48.338] +++ exit code: 0
I0916 10:04:48.371] Recording: run_crd_tests
I0916 10:04:48.372] Running command: run_crd_tests
I0916 10:04:48.392] 
... skipping 158 lines ...
I0916 10:04:53.161] foo.company.com/test patched
I0916 10:04:53.260] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0916 10:04:53.352] (Bfoo.company.com/test patched
I0916 10:04:53.462] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0916 10:04:53.564] (Bfoo.company.com/test patched
I0916 10:04:53.674] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0916 10:04:53.856] (B+++ [0916 10:04:53] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0916 10:04:53.927] {
I0916 10:04:53.927]     "apiVersion": "company.com/v1",
I0916 10:04:53.927]     "kind": "Foo",
I0916 10:04:53.927]     "metadata": {
I0916 10:04:53.927]         "annotations": {
I0916 10:04:53.928]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 190 lines ...
I0916 10:05:15.987] (Bnamespace/non-native-resources created
I0916 10:05:16.154] bar.company.com/test created
I0916 10:05:16.253] crd.sh:455: Successful get bars {{len .items}}: 1
I0916 10:05:16.333] (Bnamespace "non-native-resources" deleted
I0916 10:05:21.578] crd.sh:458: Successful get bars {{len .items}}: 0
I0916 10:05:21.747] (Bcustomresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0916 10:05:21.848] Error from server (NotFound): namespaces "non-native-resources" not found
I0916 10:05:21.948] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0916 10:05:21.949] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0916 10:05:22.054] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0916 10:05:22.081] +++ exit code: 0
I0916 10:05:22.116] Recording: run_cmd_with_img_tests
I0916 10:05:22.117] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0916 10:05:22.430] I0916 10:05:22.429770   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628322-22203", Name:"test1-6cdffdb5b8", UID:"a516eba1-2dd8-4734-973b-c6db56d317d8", APIVersion:"apps/v1", ResourceVersion:"926", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-9fjfb
I0916 10:05:22.532] Successful
I0916 10:05:22.533] message:deployment.apps/test1 created
I0916 10:05:22.533] has:deployment.apps/test1 created
I0916 10:05:22.533] deployment.apps "test1" deleted
I0916 10:05:22.602] Successful
I0916 10:05:22.603] message:error: Invalid image name "InvalidImageName": invalid reference format
I0916 10:05:22.604] has:error: Invalid image name "InvalidImageName": invalid reference format
I0916 10:05:22.616] +++ exit code: 0
I0916 10:05:22.655] +++ [0916 10:05:22] Testing recursive resources
I0916 10:05:22.660] +++ [0916 10:05:22] Creating namespace namespace-1568628322-7190
I0916 10:05:22.746] namespace/namespace-1568628322-7190 created
I0916 10:05:22.839] Context "test" modified.
W0916 10:05:22.939] W0916 10:05:22.755241   49471 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0916 10:05:22.940] E0916 10:05:22.756624   53026 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:22.940] W0916 10:05:22.859821   49471 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0916 10:05:22.941] E0916 10:05:22.861203   53026 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:22.955] W0916 10:05:22.954605   49471 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0916 10:05:22.956] E0916 10:05:22.956144   53026 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:05:23.058] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:05:23.297] (Bgeneric-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:05:23.299] (BSuccessful
I0916 10:05:23.299] message:pod/busybox0 created
I0916 10:05:23.299] pod/busybox1 created
I0916 10:05:23.300] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0916 10:05:23.300] has:error validating data: kind not set
I0916 10:05:23.399] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:05:23.596] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0916 10:05:23.598] Successful
I0916 10:05:23.599] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:05:23.599] has:Object 'Kind' is missing
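The "Object 'Kind' is missing" errors throughout this block all come from the intentionally broken fixture under hack/testdata/recursive/pod: judging from the JSON echoed in the error, its manifest spells the kind field as "ind", so the two valid pods are processed while the broken file fails to decode. A hedged reconstruction (file path and exact layout assumed from the error text, not copied from the repo):

cat > /tmp/busybox-broken.yaml <<'EOF'
apiVersion: v1
ind: Pod           # intentionally "ind" instead of "kind"; decoding fails with "Object 'Kind' is missing"
metadata:
  name: busybox2
  labels:
    app: busybox2
spec:
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["sleep", "3600"]
  restartPolicy: Always
EOF
kubectl create -f /tmp/busybox-broken.yaml   # -> error: unable to decode ... Object 'Kind' is missing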
I0916 10:05:23.698] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:05:23.981] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0916 10:05:23.984] Successful
I0916 10:05:23.984] message:pod/busybox0 replaced
I0916 10:05:23.984] pod/busybox1 replaced
I0916 10:05:23.985] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0916 10:05:23.985] has:error validating data: kind not set
I0916 10:05:24.077] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:05:24.183] Successful
I0916 10:05:24.184] message:Name:         busybox0
I0916 10:05:24.184] Namespace:    namespace-1568628322-7190
I0916 10:05:24.184] Priority:     0
I0916 10:05:24.184] Node:         <none>
... skipping 159 lines ...
I0916 10:05:24.204] has:Object 'Kind' is missing
I0916 10:05:24.291] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:05:24.506] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0916 10:05:24.508] Successful
I0916 10:05:24.509] message:pod/busybox0 annotated
I0916 10:05:24.509] pod/busybox1 annotated
I0916 10:05:24.509] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:05:24.509] has:Object 'Kind' is missing
I0916 10:05:24.608] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:05:24.893] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0916 10:05:24.895] Successful
I0916 10:05:24.895] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0916 10:05:24.895] pod/busybox0 configured
I0916 10:05:24.896] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0916 10:05:24.896] pod/busybox1 configured
I0916 10:05:24.896] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0916 10:05:24.896] has:error validating data: kind not set
I0916 10:05:24.993] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:05:25.158] deployment.apps/nginx created
W0916 10:05:25.259] W0916 10:05:23.062243   49471 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0916 10:05:25.259] E0916 10:05:23.063927   53026 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:25.260] E0916 10:05:23.758248   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:25.260] E0916 10:05:23.862567   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:25.260] E0916 10:05:23.957512   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:25.261] E0916 10:05:24.065378   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:25.261] E0916 10:05:24.759681   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:25.261] E0916 10:05:24.864753   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:25.261] E0916 10:05:24.959124   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:25.262] E0916 10:05:25.066932   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:25.262] I0916 10:05:25.162647   53026 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568628322-7190", Name:"nginx", UID:"d6d8b6a3-06cb-423c-8e00-3e10c084b454", APIVersion:"apps/v1", ResourceVersion:"951", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W0916 10:05:25.262] I0916 10:05:25.167267   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628322-7190", Name:"nginx-f87d999f7", UID:"c68fde88-0adc-4894-8762-a563775a141e", APIVersion:"apps/v1", ResourceVersion:"952", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-nj5jl
W0916 10:05:25.263] I0916 10:05:25.170315   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628322-7190", Name:"nginx-f87d999f7", UID:"c68fde88-0adc-4894-8762-a563775a141e", APIVersion:"apps/v1", ResourceVersion:"952", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-w5p59
W0916 10:05:25.263] I0916 10:05:25.171282   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628322-7190", Name:"nginx-f87d999f7", UID:"c68fde88-0adc-4894-8762-a563775a141e", APIVersion:"apps/v1", ResourceVersion:"952", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-gc24z
I0916 10:05:25.364] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0916 10:05:25.364] generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 44 lines ...
I0916 10:05:25.624] deployment.apps "nginx" deleted
I0916 10:05:25.723] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:05:25.897] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:05:25.900] Successful
I0916 10:05:25.900] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0916 10:05:25.901] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0916 10:05:25.901] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:05:25.901] has:Object 'Kind' is missing
I0916 10:05:25.992] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:05:26.083] Successful
I0916 10:05:26.084] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:05:26.085] has:busybox0:busybox1:
I0916 10:05:26.088] Successful
I0916 10:05:26.088] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:05:26.088] has:Object 'Kind' is missing
I0916 10:05:26.193] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:05:26.292] pod/busybox0 labeled
I0916 10:05:26.292] pod/busybox1 labeled
I0916 10:05:26.293] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:05:26.383] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0916 10:05:26.386] Successful
I0916 10:05:26.387] message:pod/busybox0 labeled
I0916 10:05:26.387] pod/busybox1 labeled
I0916 10:05:26.387] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:05:26.387] has:Object 'Kind' is missing
I0916 10:05:26.477] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:05:26.565] pod/busybox0 patched
I0916 10:05:26.566] pod/busybox1 patched
I0916 10:05:26.566] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:05:26.658] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0916 10:05:26.661] Successful
I0916 10:05:26.661] message:pod/busybox0 patched
I0916 10:05:26.662] pod/busybox1 patched
I0916 10:05:26.663] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:05:26.663] has:Object 'Kind' is missing
I0916 10:05:26.765] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:05:26.957] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:05:26.959] Successful
I0916 10:05:26.959] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0916 10:05:26.959] pod "busybox0" force deleted
I0916 10:05:26.959] pod "busybox1" force deleted
I0916 10:05:26.960] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:05:26.960] has:Object 'Kind' is missing
I0916 10:05:27.048] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:05:27.214] replicationcontroller/busybox0 created
I0916 10:05:27.218] replicationcontroller/busybox1 created
I0916 10:05:27.304] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:05:27.380] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:05:27.451] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0916 10:05:27.525] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0916 10:05:27.664] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0916 10:05:27.738] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0916 10:05:27.740] Successful
I0916 10:05:27.740] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0916 10:05:27.741] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0916 10:05:27.741] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:05:27.741] has:Object 'Kind' is missing
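The hpa checks above (minReplicas 1, maxReplicas 2, target 80%) line up with a recursive kubectl autoscale over the rc fixture directory; a hedged single-resource equivalent, with the min/max/target values taken from the assertions rather than from the harness:

kubectl autoscale rc busybox0 --min=1 --max=2 --cpu-percent=80
kubectl get hpa busybox0 -o go-template='{{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}'
# expected output, matching the assertion above: 1 2 80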
I0916 10:05:27.808] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0916 10:05:27.878] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0916 10:05:27.959] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:05:28.028] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0916 10:05:28.100] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0916 10:05:28.251] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0916 10:05:28.331] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0916 10:05:28.333] Successful
I0916 10:05:28.333] message:service/busybox0 exposed
I0916 10:05:28.333] service/busybox1 exposed
I0916 10:05:28.334] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:05:28.334] has:Object 'Kind' is missing
I0916 10:05:28.418] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:05:28.499] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0916 10:05:28.579] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0916 10:05:28.816] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0916 10:05:28.906] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0916 10:05:28.908] Successful
I0916 10:05:28.908] message:replicationcontroller/busybox0 scaled
I0916 10:05:28.908] replicationcontroller/busybox1 scaled
I0916 10:05:28.909] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:05:28.909] has:Object 'Kind' is missing
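The replica counts flipping from 1 to 2 for both controllers correspond to a recursive scale over the same fixture directory; a hedged sketch (the directory path comes from the error message, the flag combination is assumed):

kubectl scale --recursive --replicas=2 -f hack/testdata/recursive/rc
kubectl get rc busybox0 -o go-template='{{.spec.replicas}}'   # -> 2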
I0916 10:05:29.004] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:05:29.205] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:05:29.208] Successful
I0916 10:05:29.209] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0916 10:05:29.209] replicationcontroller "busybox0" force deleted
I0916 10:05:29.209] replicationcontroller "busybox1" force deleted
I0916 10:05:29.210] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:05:29.210] has:Object 'Kind' is missing
I0916 10:05:29.312] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:05:29.464] deployment.apps/nginx1-deployment created
I0916 10:05:29.469] deployment.apps/nginx0-deployment created
W0916 10:05:29.570] kubectl convert is DEPRECATED and will be removed in a future version.
W0916 10:05:29.570] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0916 10:05:29.570] E0916 10:05:25.761370   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:29.570] E0916 10:05:25.866940   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:29.571] E0916 10:05:25.960598   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:29.571] E0916 10:05:26.068962   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:29.571] I0916 10:05:26.472997   53026 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0916 10:05:29.571] E0916 10:05:26.763437   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:29.572] E0916 10:05:26.868912   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:29.572] E0916 10:05:26.962034   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:29.572] E0916 10:05:27.070385   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:29.572] I0916 10:05:27.217534   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628322-7190", Name:"busybox0", UID:"4978efab-033b-4024-a5b1-73971b726d49", APIVersion:"v1", ResourceVersion:"982", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-qvnjm
W0916 10:05:29.573] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0916 10:05:29.573] I0916 10:05:27.220803   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628322-7190", Name:"busybox1", UID:"102a9f18-dce2-44a9-a2e8-35e9b255a06c", APIVersion:"v1", ResourceVersion:"984", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-lh2bh
W0916 10:05:29.573] E0916 10:05:27.764511   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:29.573] E0916 10:05:27.870121   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:29.574] E0916 10:05:27.963012   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:29.574] E0916 10:05:28.071603   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:29.574] I0916 10:05:28.687085   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628322-7190", Name:"busybox0", UID:"4978efab-033b-4024-a5b1-73971b726d49", APIVersion:"v1", ResourceVersion:"1003", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-f469w
W0916 10:05:29.574] I0916 10:05:28.699218   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628322-7190", Name:"busybox1", UID:"102a9f18-dce2-44a9-a2e8-35e9b255a06c", APIVersion:"v1", ResourceVersion:"1007", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-wwsjm
W0916 10:05:29.575] E0916 10:05:28.766881   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:29.575] E0916 10:05:28.871759   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:29.575] E0916 10:05:28.965329   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:29.575] E0916 10:05:29.073228   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:29.576] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0916 10:05:29.576] I0916 10:05:29.469134   53026 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568628322-7190", Name:"nginx1-deployment", UID:"7ac8faa6-124c-4e39-943d-4e3e612d0b38", APIVersion:"apps/v1", ResourceVersion:"1023", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
W0916 10:05:29.576] I0916 10:05:29.474043   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628322-7190", Name:"nginx1-deployment-7bdbbfb5cf", UID:"774943ec-363a-4eed-a432-2519a44afa92", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-nb74w
W0916 10:05:29.576] I0916 10:05:29.480821   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628322-7190", Name:"nginx1-deployment-7bdbbfb5cf", UID:"774943ec-363a-4eed-a432-2519a44afa92", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-4pkxp
W0916 10:05:29.577] I0916 10:05:29.480997   53026 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568628322-7190", Name:"nginx0-deployment", UID:"b3c1a89f-d5c9-46e5-a14b-54846aa69e32", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
W0916 10:05:29.577] I0916 10:05:29.485857   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628322-7190", Name:"nginx0-deployment-57c6bff7f6", UID:"20efe201-379b-4ca9-b8ad-ed8e3c0bc857", APIVersion:"apps/v1", ResourceVersion:"1029", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-xz8jd
W0916 10:05:29.578] I0916 10:05:29.488885   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628322-7190", Name:"nginx0-deployment-57c6bff7f6", UID:"20efe201-379b-4ca9-b8ad-ed8e3c0bc857", APIVersion:"apps/v1", ResourceVersion:"1029", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-f97dz
I0916 10:05:29.678] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0916 10:05:29.679] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0916 10:05:29.874] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0916 10:05:29.877] Successful
I0916 10:05:29.878] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0916 10:05:29.878] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0916 10:05:29.878] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 10:05:29.878] has:Object 'Kind' is missing
W0916 10:05:29.979] E0916 10:05:29.768047   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:29.979] E0916 10:05:29.873455   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:29.980] E0916 10:05:29.966680   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:30.075] E0916 10:05:30.075021   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:05:30.176] deployment.apps/nginx1-deployment paused
I0916 10:05:30.176] deployment.apps/nginx0-deployment paused
I0916 10:05:30.177] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0916 10:05:30.177] Successful
I0916 10:05:30.177] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 10:05:30.178] has:Object 'Kind' is missing
... skipping 9 lines ...
I0916 10:05:30.411] 1         <none>
I0916 10:05:30.412] 
I0916 10:05:30.412] deployment.apps/nginx0-deployment 
I0916 10:05:30.412] REVISION  CHANGE-CAUSE
I0916 10:05:30.412] 1         <none>
I0916 10:05:30.412] 
I0916 10:05:30.413] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 10:05:30.413] has:nginx0-deployment
I0916 10:05:30.413] Successful
I0916 10:05:30.413] message:deployment.apps/nginx1-deployment 
I0916 10:05:30.414] REVISION  CHANGE-CAUSE
I0916 10:05:30.414] 1         <none>
I0916 10:05:30.414] 
I0916 10:05:30.414] deployment.apps/nginx0-deployment 
I0916 10:05:30.414] REVISION  CHANGE-CAUSE
I0916 10:05:30.414] 1         <none>
I0916 10:05:30.414] 
I0916 10:05:30.415] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 10:05:30.415] has:nginx1-deployment
I0916 10:05:30.416] Successful
I0916 10:05:30.416] message:deployment.apps/nginx1-deployment 
I0916 10:05:30.416] REVISION  CHANGE-CAUSE
I0916 10:05:30.417] 1         <none>
I0916 10:05:30.417] 
I0916 10:05:30.417] deployment.apps/nginx0-deployment 
I0916 10:05:30.417] REVISION  CHANGE-CAUSE
I0916 10:05:30.417] 1         <none>
I0916 10:05:30.417] 
I0916 10:05:30.417] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 10:05:30.417] has:Object 'Kind' is missing
I0916 10:05:30.498] deployment.apps "nginx1-deployment" force deleted
I0916 10:05:30.504] deployment.apps "nginx0-deployment" force deleted
W0916 10:05:30.605] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0916 10:05:30.605] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0916 10:05:30.770] E0916 10:05:30.769591   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:30.875] E0916 10:05:30.874925   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:30.969] E0916 10:05:30.968361   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:31.077] E0916 10:05:31.076642   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:05:31.604] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:05:31.775] replicationcontroller/busybox0 created
I0916 10:05:31.782] replicationcontroller/busybox1 created
I0916 10:05:31.886] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:05:31.984] Successful
I0916 10:05:31.984] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0916 10:05:31.986] message:no rollbacker has been implemented for "ReplicationController"
I0916 10:05:31.986] no rollbacker has been implemented for "ReplicationController"
I0916 10:05:31.987] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:05:31.987] has:Object 'Kind' is missing
I0916 10:05:32.081] Successful
I0916 10:05:32.081] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:05:32.081] error: replicationcontrollers "busybox0" pausing is not supported
I0916 10:05:32.082] error: replicationcontrollers "busybox1" pausing is not supported
I0916 10:05:32.082] has:Object 'Kind' is missing
I0916 10:05:32.083] Successful
I0916 10:05:32.084] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:05:32.084] error: replicationcontrollers "busybox0" pausing is not supported
I0916 10:05:32.084] error: replicationcontrollers "busybox1" pausing is not supported
I0916 10:05:32.084] has:replicationcontrollers "busybox0" pausing is not supported
I0916 10:05:32.086] Successful
I0916 10:05:32.086] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:05:32.087] error: replicationcontrollers "busybox0" pausing is not supported
I0916 10:05:32.087] error: replicationcontrollers "busybox1" pausing is not supported
I0916 10:05:32.087] has:replicationcontrollers "busybox1" pausing is not supported
I0916 10:05:32.183] Successful
I0916 10:05:32.184] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:05:32.184] error: replicationcontrollers "busybox0" resuming is not supported
I0916 10:05:32.184] error: replicationcontrollers "busybox1" resuming is not supported
I0916 10:05:32.184] has:Object 'Kind' is missing
I0916 10:05:32.186] Successful
I0916 10:05:32.186] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:05:32.187] error: replicationcontrollers "busybox0" resuming is not supported
I0916 10:05:32.187] error: replicationcontrollers "busybox1" resuming is not supported
I0916 10:05:32.187] has:replicationcontrollers "busybox0" resuming is not supported
I0916 10:05:32.188] Successful
I0916 10:05:32.189] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:05:32.189] error: replicationcontrollers "busybox0" resuming is not supported
I0916 10:05:32.190] error: replicationcontrollers "busybox1" resuming is not supported
I0916 10:05:32.190] has:replicationcontrollers "busybox0" resuming is not supported
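The paired "pausing is not supported" / "resuming is not supported" errors are expected: kubectl rollout pause and resume are only implemented for controllers that support it (Deployments), so the recursive run reports the broken fixture's decode error plus one unsupported-verb error per ReplicationController. Hedged sketch:

kubectl rollout pause deployment/nginx   # supported for Deployments
kubectl rollout pause rc/busybox0        # -> error: replicationcontrollers "busybox0" pausing is not supported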
I0916 10:05:32.269] replicationcontroller "busybox0" force deleted
I0916 10:05:32.276] replicationcontroller "busybox1" force deleted
W0916 10:05:32.378] E0916 10:05:31.771020   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:32.378] I0916 10:05:31.779793   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628322-7190", Name:"busybox0", UID:"678244df-1763-478b-9e67-7dfa07d288f4", APIVersion:"v1", ResourceVersion:"1072", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-n9sbm
W0916 10:05:32.379] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0916 10:05:32.379] I0916 10:05:31.785063   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628322-7190", Name:"busybox1", UID:"ab167b05-6f1c-4f1d-8e14-d97cc2414420", APIVersion:"v1", ResourceVersion:"1074", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-8hgdt
W0916 10:05:32.379] E0916 10:05:31.876396   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:32.380] E0916 10:05:31.969732   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:32.380] E0916 10:05:32.077965   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:32.380] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0916 10:05:32.381] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0916 10:05:32.773] E0916 10:05:32.772602   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:32.879] E0916 10:05:32.878076   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:32.972] E0916 10:05:32.971360   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:33.080] E0916 10:05:33.079534   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:05:33.283] Recording: run_namespace_tests
I0916 10:05:33.284] Running command: run_namespace_tests
I0916 10:05:33.309] 
I0916 10:05:33.315] +++ Running case: test-cmd.run_namespace_tests 
I0916 10:05:33.315] +++ working dir: /go/src/k8s.io/kubernetes
I0916 10:05:33.318] +++ command: run_namespace_tests
I0916 10:05:33.328] +++ [0916 10:05:33] Testing kubectl(v1:namespaces)
I0916 10:05:33.407] namespace/my-namespace created
I0916 10:05:33.508] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0916 10:05:33.595] namespace "my-namespace" deleted
W0916 10:05:33.775] E0916 10:05:33.774339   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:33.880] E0916 10:05:33.879354   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:33.973] E0916 10:05:33.972957   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:34.081] E0916 10:05:34.081086   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:34.776] E0916 10:05:34.776054   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:34.882] E0916 10:05:34.881193   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:34.975] E0916 10:05:34.974594   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:35.083] E0916 10:05:35.083159   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:35.778] E0916 10:05:35.777619   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:35.883] E0916 10:05:35.882743   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:35.976] E0916 10:05:35.976138   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:36.085] E0916 10:05:36.084887   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:36.779] E0916 10:05:36.779186   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:36.885] E0916 10:05:36.884558   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:36.978] E0916 10:05:36.977614   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:37.087] E0916 10:05:37.086569   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:37.781] E0916 10:05:37.780699   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:37.886] E0916 10:05:37.885889   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:37.979] E0916 10:05:37.979051   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:38.089] E0916 10:05:38.088452   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:05:38.725] namespace/my-namespace condition met
I0916 10:05:38.814] Successful
I0916 10:05:38.815] message:Error from server (NotFound): namespaces "my-namespace" not found
I0916 10:05:38.815] has: not found
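The "condition met" line followed by the NotFound check is consistent with waiting for the namespace to finish terminating before recreating it; a hedged sketch of that step (the actual harness command is not shown in the log):

kubectl wait --for=delete namespace/my-namespace --timeout=60s   # prints: namespace/my-namespace condition met
kubectl get namespace my-namespace                               # -> Error from server (NotFound): namespaces "my-namespace" not found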
I0916 10:05:38.890] namespace/my-namespace created
I0916 10:05:38.984] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0916 10:05:39.234] Successful
I0916 10:05:39.235] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0916 10:05:39.235] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0916 10:05:39.240] namespace "namespace-1568628285-28276" deleted
I0916 10:05:39.240] namespace "namespace-1568628286-7041" deleted
I0916 10:05:39.240] namespace "namespace-1568628288-18399" deleted
I0916 10:05:39.240] namespace "namespace-1568628289-12263" deleted
I0916 10:05:39.241] namespace "namespace-1568628322-22203" deleted
I0916 10:05:39.241] namespace "namespace-1568628322-7190" deleted
I0916 10:05:39.241] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0916 10:05:39.241] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0916 10:05:39.241] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0916 10:05:39.242] has:warning: deleting cluster-scoped resources
I0916 10:05:39.242] Successful
I0916 10:05:39.242] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0916 10:05:39.242] namespace "kube-node-lease" deleted
I0916 10:05:39.242] namespace "my-namespace" deleted
I0916 10:05:39.242] namespace "namespace-1568628188-6615" deleted
... skipping 27 lines ...
I0916 10:05:39.247] namespace "namespace-1568628285-28276" deleted
I0916 10:05:39.247] namespace "namespace-1568628286-7041" deleted
I0916 10:05:39.247] namespace "namespace-1568628288-18399" deleted
I0916 10:05:39.248] namespace "namespace-1568628289-12263" deleted
I0916 10:05:39.248] namespace "namespace-1568628322-22203" deleted
I0916 10:05:39.248] namespace "namespace-1568628322-7190" deleted
I0916 10:05:39.248] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0916 10:05:39.248] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0916 10:05:39.249] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0916 10:05:39.249] has:namespace "my-namespace" deleted
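The three Forbidden errors above are also expected: the NamespaceLifecycle admission plugin refuses to delete the built-in namespaces, so a broad namespace delete still leaves default, kube-public and kube-system in place. Hedged sketch:

kubectl delete namespace kube-system   # -> Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted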
I0916 10:05:39.350] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0916 10:05:39.427] namespace/other created
I0916 10:05:39.527] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0916 10:05:39.619] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:05:39.793] pod/valid-pod created
I0916 10:05:39.898] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0916 10:05:39.998] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0916 10:05:40.078] Successful
I0916 10:05:40.078] message:error: a resource cannot be retrieved by name across all namespaces
I0916 10:05:40.078] has:a resource cannot be retrieved by name across all namespaces
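The error above shows kubectl refusing to combine a specific resource name with --all-namespaces; scoping the name to a single namespace is the accepted form. Hedged sketch:

kubectl get pods valid-pod --all-namespaces    # -> error: a resource cannot be retrieved by name across all namespaces
kubectl get pods valid-pod --namespace=other   # accepted: the lookup is scoped to one namespace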
I0916 10:05:40.170] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0916 10:05:40.251] pod "valid-pod" force deleted
I0916 10:05:40.353] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:05:40.431] namespace "other" deleted
W0916 10:05:40.532] E0916 10:05:38.782217   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:40.532] E0916 10:05:38.887183   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:40.532] E0916 10:05:38.980248   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:40.533] E0916 10:05:39.089545   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:40.533] I0916 10:05:39.472842   53026 shared_informer.go:197] Waiting for caches to sync for resource quota
W0916 10:05:40.533] I0916 10:05:39.472899   53026 shared_informer.go:204] Caches are synced for resource quota 
W0916 10:05:40.533] E0916 10:05:39.783435   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:40.534] I0916 10:05:39.882003   53026 shared_informer.go:197] Waiting for caches to sync for garbage collector
W0916 10:05:40.534] I0916 10:05:39.882104   53026 shared_informer.go:204] Caches are synced for garbage collector 
W0916 10:05:40.534] E0916 10:05:39.888868   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:40.534] E0916 10:05:39.981743   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:40.535] E0916 10:05:40.091178   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:40.535] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0916 10:05:40.785] E0916 10:05:40.785114   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:40.891] E0916 10:05:40.890368   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:40.984] E0916 10:05:40.983590   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:41.093] E0916 10:05:41.092725   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:41.787] E0916 10:05:41.786638   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:41.892] E0916 10:05:41.891876   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:41.985] E0916 10:05:41.984933   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:42.094] E0916 10:05:42.094168   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:42.588] I0916 10:05:42.587790   53026 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1568628322-7190
W0916 10:05:42.591] I0916 10:05:42.590849   53026 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1568628322-7190
W0916 10:05:42.788] E0916 10:05:42.788275   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:42.894] E0916 10:05:42.893387   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:42.987] E0916 10:05:42.986849   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:43.100] E0916 10:05:43.100215   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:43.789] E0916 10:05:43.789224   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:43.895] E0916 10:05:43.895110   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:43.989] E0916 10:05:43.988694   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:44.102] E0916 10:05:44.101161   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:44.794] E0916 10:05:44.794122   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:44.899] E0916 10:05:44.898245   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:44.990] E0916 10:05:44.989881   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:45.102] E0916 10:05:45.102103   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:05:45.558] +++ exit code: 0
I0916 10:05:45.595] Recording: run_secrets_test
I0916 10:05:45.595] Running command: run_secrets_test
I0916 10:05:45.619] 
I0916 10:05:45.622] +++ Running case: test-cmd.run_secrets_test 
I0916 10:05:45.625] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 57 lines ...
I0916 10:05:47.569] core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0916 10:05:47.645] secret "test-secret" deleted
I0916 10:05:47.731] secret/test-secret created
I0916 10:05:47.828] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0916 10:05:47.917] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0916 10:05:47.997] secret "test-secret" deleted
W0916 10:05:48.098] E0916 10:05:45.797019   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:48.099] I0916 10:05:45.861801   69148 loader.go:375] Config loaded from file:  /tmp/tmp.7jO9JdA8X2/.kube/config
W0916 10:05:48.100] E0916 10:05:45.899911   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:48.100] E0916 10:05:45.991408   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:48.101] E0916 10:05:46.103587   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:48.101] E0916 10:05:46.798768   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:48.101] E0916 10:05:46.901315   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:48.101] E0916 10:05:46.992749   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:48.102] E0916 10:05:47.105192   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:48.102] E0916 10:05:47.800352   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:48.102] E0916 10:05:47.902624   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:48.102] E0916 10:05:47.994229   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:48.107] E0916 10:05:48.107034   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:05:48.208] secret/secret-string-data created
I0916 10:05:48.260] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0916 10:05:48.350] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0916 10:05:48.444] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0916 10:05:48.522] secret "secret-string-data" deleted
I0916 10:05:48.617] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:05:48.777] secret "test-secret" deleted
I0916 10:05:48.863] namespace "test-secrets" deleted
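The secrets checks above come from the bash harness (core.sh): it creates a TLS-typed secret, re-creates it, and then verifies a secret populated via stringData. A rough hand-run equivalent of those assertions, using kubectl's go-template output (the cert/key paths are placeholders, not taken from this log):

kubectl create namespace test-secrets
# TLS secret: its type should read back as kubernetes.io/tls (core.sh:767/774).
kubectl create secret tls test-secret --namespace=test-secrets --cert=tls.crt --key=tls.key
kubectl get secret/test-secret --namespace=test-secrets -o go-template='{{.type}}'
# stringData is write-only: it is folded into .data as base64 (v1 -> djE=, v2 -> djI=)
# and is not stored back, which is why core.sh:798 expects <no value> for .stringData.
kubectl apply --namespace=test-secrets -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-string-data
stringData:
  k1: v1
  k2: v2
EOF
kubectl get secret/secret-string-data --namespace=test-secrets -o go-template='{{.data}}'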
W0916 10:05:48.964] E0916 10:05:48.801993   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:48.964] I0916 10:05:48.832466   53026 namespace_controller.go:171] Namespace has been deleted my-namespace
W0916 10:05:48.965] E0916 10:05:48.904657   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:48.996] E0916 10:05:48.995815   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:49.109] E0916 10:05:49.108419   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:49.352] I0916 10:05:49.351916   53026 namespace_controller.go:171] Namespace has been deleted kube-node-lease
W0916 10:05:49.353] I0916 10:05:49.351924   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628209-2503
W0916 10:05:49.353] I0916 10:05:49.351968   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628193-16391
W0916 10:05:49.362] I0916 10:05:49.361918   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628190-6656
W0916 10:05:49.363] I0916 10:05:49.361918   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628188-6615
W0916 10:05:49.367] I0916 10:05:49.366512   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628203-24551
... skipping 8 lines ...
W0916 10:05:49.624] I0916 10:05:49.623437   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628232-23312
W0916 10:05:49.626] I0916 10:05:49.626033   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628240-194
W0916 10:05:49.632] I0916 10:05:49.632012   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628240-1398
W0916 10:05:49.637] I0916 10:05:49.637030   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628237-11587
W0916 10:05:49.643] I0916 10:05:49.642450   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628233-9829
W0916 10:05:49.730] I0916 10:05:49.729545   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628243-15454
W0916 10:05:49.804] E0916 10:05:49.803990   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:49.879] I0916 10:05:49.878750   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628265-18185
W0916 10:05:49.885] I0916 10:05:49.884507   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628264-5447
W0916 10:05:49.886] I0916 10:05:49.886309   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628246-2262
W0916 10:05:49.907] E0916 10:05:49.906387   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:49.919] I0916 10:05:49.919237   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628285-10030
W0916 10:05:49.929] I0916 10:05:49.928680   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628281-23522
W0916 10:05:49.929] I0916 10:05:49.928758   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628246-22260
W0916 10:05:49.929] I0916 10:05:49.928760   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628275-17295
W0916 10:05:49.932] I0916 10:05:49.932327   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628266-29930
W0916 10:05:49.935] I0916 10:05:49.935266   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628281-15660
W0916 10:05:49.974] I0916 10:05:49.973657   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628285-28276
W0916 10:05:49.998] E0916 10:05:49.997369   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:50.066] I0916 10:05:50.065323   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628286-7041
W0916 10:05:50.066] I0916 10:05:50.065346   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628288-18399
W0916 10:05:50.088] I0916 10:05:50.088014   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628289-12263
W0916 10:05:50.105] I0916 10:05:50.104751   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628322-22203
W0916 10:05:50.110] E0916 10:05:50.110230   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:50.160] I0916 10:05:50.160310   53026 namespace_controller.go:171] Namespace has been deleted namespace-1568628322-7190
W0916 10:05:50.534] I0916 10:05:50.533775   53026 namespace_controller.go:171] Namespace has been deleted other
W0916 10:05:50.806] E0916 10:05:50.805433   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:50.908] E0916 10:05:50.907787   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:50.999] E0916 10:05:50.998739   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:51.112] E0916 10:05:51.112020   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:51.807] E0916 10:05:51.807088   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:51.910] E0916 10:05:51.909405   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:52.000] E0916 10:05:52.000158   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:52.114] E0916 10:05:52.113400   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:52.809] E0916 10:05:52.808847   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:52.911] E0916 10:05:52.910960   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:53.002] E0916 10:05:53.001798   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:53.116] E0916 10:05:53.115183   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:53.811] E0916 10:05:53.810649   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:53.913] E0916 10:05:53.912319   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:54.004] E0916 10:05:54.003305   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:05:54.104] +++ exit code: 0
I0916 10:05:54.104] Recording: run_configmap_tests
I0916 10:05:54.105] Running command: run_configmap_tests
I0916 10:05:54.105] 
I0916 10:05:54.105] +++ Running case: test-cmd.run_configmap_tests 
I0916 10:05:54.105] +++ working dir: /go/src/k8s.io/kubernetes
I0916 10:05:54.105] +++ command: run_configmap_tests
I0916 10:05:54.105] +++ [0916 10:05:54] Creating namespace namespace-1568628354-28362
I0916 10:05:54.166] namespace/namespace-1568628354-28362 created
I0916 10:05:54.260] Context "test" modified.
I0916 10:05:54.267] +++ [0916 10:05:54] Testing configmaps
W0916 10:05:54.368] E0916 10:05:54.117068   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:05:54.494] configmap/test-configmap created
I0916 10:05:54.595] core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
I0916 10:05:54.678] configmap "test-configmap" deleted
I0916 10:05:54.786] core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
I0916 10:05:54.867] namespace/test-configmaps created
I0916 10:05:54.961] core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
... skipping 3 lines ...
I0916 10:05:55.321] configmap/test-binary-configmap created
I0916 10:05:55.413] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0916 10:05:55.502] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0916 10:05:55.753] configmap "test-configmap" deleted
I0916 10:05:55.837] configmap "test-binary-configmap" deleted
I0916 10:05:55.924] namespace "test-configmaps" deleted
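The configmap steps above (core.sh:28-49) boil down to creating a plain and a binary configmap in a dedicated namespace and reading their names back. A minimal hand-run sketch; the literal value and the binary fixture path are placeholders, not taken from this log:

kubectl create namespace test-configmaps
kubectl create configmap test-configmap --namespace=test-configmaps --from-literal=key1=value1
kubectl create configmap test-binary-configmap --namespace=test-configmaps --from-file=data=./fixture.bin
kubectl get configmap/test-configmap --namespace=test-configmaps -o go-template='{{.metadata.name}}'
kubectl delete namespace test-configmaps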
W0916 10:05:56.025] E0916 10:05:54.812118   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:56.025] E0916 10:05:54.913911   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:56.025] E0916 10:05:55.005364   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:56.026] E0916 10:05:55.119241   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:56.026] E0916 10:05:55.813783   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:56.026] E0916 10:05:55.915896   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:56.026] E0916 10:05:56.006932   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:56.122] E0916 10:05:56.121500   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:56.816] E0916 10:05:56.815383   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:56.918] E0916 10:05:56.917625   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:57.009] E0916 10:05:57.008667   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:57.123] E0916 10:05:57.122979   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:57.817] E0916 10:05:57.816867   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:57.920] E0916 10:05:57.919372   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:58.011] E0916 10:05:58.010284   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:58.125] E0916 10:05:58.124868   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:58.819] E0916 10:05:58.818266   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:58.921] E0916 10:05:58.920680   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:58.975] I0916 10:05:58.975106   53026 namespace_controller.go:171] Namespace has been deleted test-secrets
W0916 10:05:59.013] E0916 10:05:59.012311   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:59.127] E0916 10:05:59.126764   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:59.820] E0916 10:05:59.819754   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:05:59.922] E0916 10:05:59.922265   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:00.014] E0916 10:06:00.014159   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:00.128] E0916 10:06:00.128088   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:00.822] E0916 10:06:00.821506   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:00.924] E0916 10:06:00.923743   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:01.015] E0916 10:06:01.015113   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:06:01.116] +++ exit code: 0
I0916 10:06:01.116] Recording: run_client_config_tests
I0916 10:06:01.117] Running command: run_client_config_tests
I0916 10:06:01.117] 
I0916 10:06:01.117] +++ Running case: test-cmd.run_client_config_tests 
I0916 10:06:01.117] +++ working dir: /go/src/k8s.io/kubernetes
I0916 10:06:01.117] +++ command: run_client_config_tests
I0916 10:06:01.127] +++ [0916 10:06:01] Creating namespace namespace-1568628361-12449
I0916 10:06:01.204] namespace/namespace-1568628361-12449 created
I0916 10:06:01.277] Context "test" modified.
I0916 10:06:01.284] +++ [0916 10:06:01] Testing client config
I0916 10:06:01.356] Successful
I0916 10:06:01.356] message:error: stat missing: no such file or directory
I0916 10:06:01.356] has:missing: no such file or directory
I0916 10:06:01.429] Successful
I0916 10:06:01.429] message:error: stat missing: no such file or directory
I0916 10:06:01.429] has:missing: no such file or directory
I0916 10:06:01.499] Successful
I0916 10:06:01.500] message:error: stat missing: no such file or directory
I0916 10:06:01.500] has:missing: no such file or directory
I0916 10:06:01.575] Successful
I0916 10:06:01.576] message:Error in configuration: context was not found for specified context: missing-context
I0916 10:06:01.576] has:context was not found for specified context: missing-context
I0916 10:06:01.647] Successful
I0916 10:06:01.648] message:error: no server found for cluster "missing-cluster"
I0916 10:06:01.648] has:no server found for cluster "missing-cluster"
I0916 10:06:01.729] Successful
I0916 10:06:01.730] message:error: auth info "missing-user" does not exist
I0916 10:06:01.730] has:auth info "missing-user" does not exist
W0916 10:06:01.830] E0916 10:06:01.129501   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:01.831] E0916 10:06:01.823120   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:01.925] E0916 10:06:01.925045   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:02.017] E0916 10:06:02.016797   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:06:02.118] Successful
I0916 10:06:02.119] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0916 10:06:02.119] has:error loading config file
I0916 10:06:02.119] Successful
I0916 10:06:02.119] message:error: stat missing-config: no such file or directory
I0916 10:06:02.119] has:no such file or directory
I0916 10:06:02.119] +++ exit code: 0
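The client-config case above only checks kubectl's error paths for a missing or broken kubeconfig. The kinds of invocations that produce those messages look roughly like this (the placeholder names and the /tmp/newconfig.yaml path match the output above):

kubectl get pods --kubeconfig=missing            # error: stat missing: no such file or directory
kubectl get pods --context=missing-context       # context was not found for specified context
kubectl get pods --cluster=missing-cluster       # no server found for cluster "missing-cluster"
kubectl get pods --user=missing-user             # auth info "missing-user" does not exist
# A kubeconfig with a bogus apiVersion fails to load at all:
printf 'apiVersion: v-1\nkind: Config\n' > /tmp/newconfig.yaml
kubectl get pods --kubeconfig=/tmp/newconfig.yaml   # no kind "Config" is registered for version "v-1"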
I0916 10:06:02.119] Recording: run_service_accounts_tests
I0916 10:06:02.120] Running command: run_service_accounts_tests
I0916 10:06:02.120] 
I0916 10:06:02.120] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 6 lines ...
I0916 10:06:02.313] core.sh:828: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-service-accounts\" }}found{{end}}{{end}}:: :
I0916 10:06:02.392] namespace/test-service-accounts created
I0916 10:06:02.489] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0916 10:06:02.565] serviceaccount/test-service-account created
I0916 10:06:02.663] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0916 10:06:02.756] serviceaccount "test-service-account" deleted
W0916 10:06:02.857] E0916 10:06:02.130967   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:02.857] E0916 10:06:02.824844   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:02.928] E0916 10:06:02.927355   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:03.020] E0916 10:06:03.020073   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:06:03.121] namespace "test-service-accounts" deleted
W0916 10:06:03.222] E0916 10:06:03.136744   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:03.827] E0916 10:06:03.827023   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:03.930] E0916 10:06:03.929875   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:04.025] E0916 10:06:04.024434   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:04.138] E0916 10:06:04.138144   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:04.829] E0916 10:06:04.829104   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:04.932] E0916 10:06:04.931686   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:05.028] E0916 10:06:05.027445   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:05.140] E0916 10:06:05.139557   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:05.831] E0916 10:06:05.830827   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:05.934] E0916 10:06:05.933404   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:06.020] I0916 10:06:06.020068   53026 namespace_controller.go:171] Namespace has been deleted test-configmaps
W0916 10:06:06.029] E0916 10:06:06.029164   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:06.141] E0916 10:06:06.141014   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:06.833] E0916 10:06:06.832348   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:06.935] E0916 10:06:06.934837   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:07.031] E0916 10:06:07.030880   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:07.142] E0916 10:06:07.142325   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:07.834] E0916 10:06:07.833672   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:07.936] E0916 10:06:07.935671   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:08.032] E0916 10:06:08.032172   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:06:08.133] +++ exit code: 0
I0916 10:06:08.134] Recording: run_job_tests
I0916 10:06:08.134] Running command: run_job_tests
I0916 10:06:08.134] 
I0916 10:06:08.135] +++ Running case: test-cmd.run_job_tests 
I0916 10:06:08.135] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0916 10:06:08.711] Labels:                        run=pi
I0916 10:06:08.711] Annotations:                   <none>
I0916 10:06:08.712] Schedule:                      59 23 31 2 *
I0916 10:06:08.712] Concurrency Policy:            Allow
I0916 10:06:08.712] Suspend:                       False
I0916 10:06:08.712] Successful Job History Limit:  3
I0916 10:06:08.713] Failed Job History Limit:      1
I0916 10:06:08.713] Starting Deadline Seconds:     <unset>
I0916 10:06:08.713] Selector:                      <unset>
I0916 10:06:08.713] Parallelism:                   <unset>
I0916 10:06:08.713] Completions:                   <unset>
I0916 10:06:08.713] Pod Template:
I0916 10:06:08.713]   Labels:  run=pi
... skipping 32 lines ...
I0916 10:06:09.178]                 run=pi
I0916 10:06:09.178] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0916 10:06:09.178] Controlled By:  CronJob/pi
I0916 10:06:09.179] Parallelism:    1
I0916 10:06:09.179] Completions:    1
I0916 10:06:09.179] Start Time:     Mon, 16 Sep 2019 10:06:08 +0000
I0916 10:06:09.179] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0916 10:06:09.179] Pod Template:
I0916 10:06:09.179]   Labels:  controller-uid=bf776cfe-4876-4099-8aa4-0e427ac6db5d
I0916 10:06:09.179]            job-name=test-job
I0916 10:06:09.179]            run=pi
I0916 10:06:09.180]   Containers:
I0916 10:06:09.180]    pi:
... skipping 15 lines ...
I0916 10:06:09.182]   Type    Reason            Age   From            Message
I0916 10:06:09.182]   ----    ------            ----  ----            -------
I0916 10:06:09.182]   Normal  SuccessfulCreate  1s    job-controller  Created pod: test-job-pktjg
I0916 10:06:09.249] job.batch "test-job" deleted
I0916 10:06:09.325] cronjob.batch "pi" deleted
I0916 10:06:09.399] namespace "test-jobs" deleted
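The job case above first creates a CronJob named pi (via the deprecated run generator) and then instantiates a one-off Job from it, which is where the cronjob.kubernetes.io/instantiate: manual annotation and the Controlled By: CronJob/pi line in the describe output come from. A present-day approximation; the image and command are assumptions, since the log does not show them:

kubectl create namespace test-jobs
kubectl create cronjob pi --namespace=test-jobs --schedule='59 23 31 2 *' \
  --image=k8s.gcr.io/perl -- perl -Mbignum=bpi -wle 'print bpi(20)'
# Instantiate a one-off Job from the CronJob's template, then inspect it.
kubectl create job test-job --namespace=test-jobs --from=cronjob/pi
kubectl describe job test-job --namespace=test-jobs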
W0916 10:06:09.500] E0916 10:06:08.143506   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:09.500] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0916 10:06:09.501] E0916 10:06:08.834661   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:09.501] E0916 10:06:08.937200   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:09.501] I0916 10:06:08.944804   53026 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"bf776cfe-4876-4099-8aa4-0e427ac6db5d", APIVersion:"batch/v1", ResourceVersion:"1395", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pktjg
W0916 10:06:09.501] E0916 10:06:09.033471   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:09.502] E0916 10:06:09.144982   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:09.836] E0916 10:06:09.836274   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:09.939] E0916 10:06:09.938596   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:10.035] E0916 10:06:10.034406   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:10.146] E0916 10:06:10.146359   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:10.838] E0916 10:06:10.837663   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:10.940] E0916 10:06:10.939962   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:11.036] E0916 10:06:11.035834   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:11.148] E0916 10:06:11.147945   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:11.839] E0916 10:06:11.839127   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:11.942] E0916 10:06:11.941353   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:12.037] E0916 10:06:12.037174   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:12.150] E0916 10:06:12.149485   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:12.841] E0916 10:06:12.840402   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:12.943] I0916 10:06:12.943051   53026 namespace_controller.go:171] Namespace has been deleted test-service-accounts
W0916 10:06:12.944] E0916 10:06:12.943119   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:13.039] E0916 10:06:13.038524   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:13.152] E0916 10:06:13.151300   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:13.842] E0916 10:06:13.841882   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:13.945] E0916 10:06:13.944665   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:14.040] E0916 10:06:14.040137   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:14.153] E0916 10:06:14.152869   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:06:14.550] +++ exit code: 0
I0916 10:06:14.589] Recording: run_create_job_tests
I0916 10:06:14.589] Running command: run_create_job_tests
I0916 10:06:14.615] 
I0916 10:06:14.618] +++ Running case: test-cmd.run_create_job_tests 
I0916 10:06:14.621] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 27 lines ...
I0916 10:06:16.063] +++ [0916 10:06:16] Testing pod templates
I0916 10:06:16.162] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:06:16.322] podtemplate/nginx created
I0916 10:06:16.421] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0916 10:06:16.497] NAME    CONTAINERS   IMAGES   POD LABELS
I0916 10:06:16.498] nginx   nginx        nginx    name=nginx
W0916 10:06:16.598] E0916 10:06:14.843212   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:16.599] I0916 10:06:14.900680   53026 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568628374-26242", Name:"test-job", UID:"f894e7cc-34c8-41a7-8b91-82ad1bfa23f4", APIVersion:"batch/v1", ResourceVersion:"1414", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-rlbtz
W0916 10:06:16.600] E0916 10:06:14.946129   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:16.600] E0916 10:06:15.041697   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:16.601] E0916 10:06:15.154364   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:16.601] I0916 10:06:15.172770   53026 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568628374-26242", Name:"test-job-pi", UID:"a82ed54a-64e4-45c5-a1b7-bf3db35ffc4d", APIVersion:"batch/v1", ResourceVersion:"1421", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-tr7sq
W0916 10:06:16.602] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0916 10:06:16.602] I0916 10:06:15.537596   53026 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568628374-26242", Name:"my-pi", UID:"f42ac1db-980c-4058-bd5f-b9015b2aaa3e", APIVersion:"batch/v1", ResourceVersion:"1429", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-tc2c4
W0916 10:06:16.603] E0916 10:06:15.845096   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:16.603] E0916 10:06:15.947766   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:16.603] E0916 10:06:16.043247   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:16.604] E0916 10:06:16.155912   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:16.604] I0916 10:06:16.318258   49471 controller.go:606] quota admission added evaluator for: podtemplates
I0916 10:06:16.705] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0916 10:06:16.761] podtemplate "nginx" deleted
I0916 10:06:16.858] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:06:16.873] +++ exit code: 0
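Pod templates are a rarely used core resource; the block above just creates one, lists it, and deletes it. A self-contained equivalent of the nginx template that the printed columns describe (name, container, image and label all nginx):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PodTemplate
metadata:
  name: nginx
template:
  metadata:
    labels:
      name: nginx
  spec:
    containers:
    - name: nginx
      image: nginx
EOF
kubectl get podtemplates -o go-template='{{range.items}}{{.metadata.name}}:{{end}}'   # expect: nginx:
kubectl delete podtemplate nginx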
I0916 10:06:16.909] Recording: run_service_tests
... skipping 66 lines ...
I0916 10:06:17.826] Port:              <unset>  6379/TCP
I0916 10:06:17.826] TargetPort:        6379/TCP
I0916 10:06:17.826] Endpoints:         <none>
I0916 10:06:17.826] Session Affinity:  None
I0916 10:06:17.827] Events:            <none>
I0916 10:06:17.827]
W0916 10:06:17.927] E0916 10:06:16.846565   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:17.928] E0916 10:06:16.949206   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:17.928] E0916 10:06:17.044980   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:17.929] E0916 10:06:17.157380   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:17.929] E0916 10:06:17.848324   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:17.951] E0916 10:06:17.950605   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:18.047] E0916 10:06:18.046829   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:06:18.148] Successful describe services:
I0916 10:06:18.148] Name:              kubernetes
I0916 10:06:18.148] Namespace:         default
I0916 10:06:18.148] Labels:            component=apiserver
I0916 10:06:18.149]                    provider=kubernetes
I0916 10:06:18.149] Annotations:       <none>
... skipping 178 lines ...
I0916 10:06:18.950]   selector:
I0916 10:06:18.950]     role: padawan
I0916 10:06:18.950]   sessionAffinity: None
I0916 10:06:18.950]   type: ClusterIP
I0916 10:06:18.950] status:
I0916 10:06:18.950]   loadBalancer: {}
W0916 10:06:19.051] E0916 10:06:18.158849   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:19.051] E0916 10:06:18.849529   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:19.052] E0916 10:06:18.951993   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:19.052] error: you must specify resources by --filename when --local is set.
W0916 10:06:19.052] Example resource specifications include:
W0916 10:06:19.052]    '-f rsrc.yaml'
W0916 10:06:19.052]    '--filename=rsrc.json'
W0916 10:06:19.052] E0916 10:06:19.048209   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:06:19.153] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0916 10:06:19.283] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0916 10:06:19.369] service "redis-master" deleted
I0916 10:06:19.468] core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0916 10:06:19.558] core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0916 10:06:19.721] service/redis-master created
... skipping 5 lines ...
I0916 10:06:20.420] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0916 10:06:20.506] service "redis-master" deleted
I0916 10:06:20.598] service "service-v1-test" deleted
I0916 10:06:20.696] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0916 10:06:20.795] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0916 10:06:20.956] service/redis-master created
W0916 10:06:21.057] E0916 10:06:19.160314   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:21.057] I0916 10:06:19.522981   53026 namespace_controller.go:171] Namespace has been deleted test-jobs
W0916 10:06:21.057] E0916 10:06:19.851401   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:21.058] E0916 10:06:19.953531   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:21.058] E0916 10:06:20.050097   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:21.058] E0916 10:06:20.161622   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:21.059] E0916 10:06:20.852857   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:21.059] E0916 10:06:20.954665   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:21.059] E0916 10:06:21.052394   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:06:21.160] service/redis-slave created
I0916 10:06:21.232] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0916 10:06:21.320] Successful
I0916 10:06:21.320] message:NAME           RSRC
I0916 10:06:21.320] kubernetes     145
I0916 10:06:21.320] redis-master   1465
... skipping 15 lines ...
I0916 10:06:22.445] core.sh:1013: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: testmetadata:
I0916 10:06:22.534] core.sh:1014: Successful get service testmetadata {{.metadata.annotations}}: map[zone-context:home]
I0916 10:06:22.633] service/exposemetadata exposed
I0916 10:06:22.733] core.sh:1020: Successful get service exposemetadata {{.metadata.annotations}}: map[zone-context:work]
I0916 10:06:22.824] service "exposemetadata" deleted
I0916 10:06:22.832] service "testmetadata" deleted
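The metadata-service case above checks that annotations are preserved on both the service created alongside the deployment (zone-context: home) and a second exposed service (zone-context: work). The harness presumably drives this through overrides on run/expose; an equivalent way to end up with the same annotations by hand (the port is a placeholder):

kubectl expose deployment testmetadata --name=exposemetadata --port=80
kubectl annotate service exposemetadata zone-context=work
kubectl get service exposemetadata -o go-template='{{.metadata.annotations}}'   # should include zone-context:work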
W0916 10:06:22.933] E0916 10:06:21.163086   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:22.934] E0916 10:06:21.854425   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:22.935] E0916 10:06:21.956355   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:22.935] E0916 10:06:22.053547   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:22.935] E0916 10:06:22.164616   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:22.936] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0916 10:06:22.936] I0916 10:06:22.332299   53026 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"76c31406-14e1-4dcd-ad9e-4072f7236b8d", APIVersion:"apps/v1", ResourceVersion:"1481", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-bd968f46 to 2
W0916 10:06:22.937] I0916 10:06:22.338548   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"7292a0c6-21fc-423e-ab16-fd1e1b7846aa", APIVersion:"apps/v1", ResourceVersion:"1482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-t6lbm
W0916 10:06:22.937] I0916 10:06:22.341847   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"7292a0c6-21fc-423e-ab16-fd1e1b7846aa", APIVersion:"apps/v1", ResourceVersion:"1482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-69rm8
W0916 10:06:22.938] E0916 10:06:22.857039   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:22.958] E0916 10:06:22.957780   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:23.056] E0916 10:06:23.055296   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:06:23.157] deployment.apps "testmetadata" deleted
I0916 10:06:23.157] +++ exit code: 0
I0916 10:06:23.157] Recording: run_daemonset_tests
I0916 10:06:23.157] Running command: run_daemonset_tests
I0916 10:06:23.157] 
I0916 10:06:23.158] +++ Running case: test-cmd.run_daemonset_tests 
... skipping 2 lines ...
I0916 10:06:23.158] +++ [0916 10:06:23] Creating namespace namespace-1568628383-20156
I0916 10:06:23.158] namespace/namespace-1568628383-20156 created
I0916 10:06:23.213] Context "test" modified.
I0916 10:06:23.220] +++ [0916 10:06:23] Testing kubectl(v1:daemonsets)
I0916 10:06:23.325] apps.sh:30: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:06:23.514] daemonset.apps/bind created
W0916 10:06:23.616] E0916 10:06:23.166342   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:23.616] I0916 10:06:23.510692   49471 controller.go:606] quota admission added evaluator for: daemonsets.apps
W0916 10:06:23.616] I0916 10:06:23.524635   49471 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0916 10:06:23.717] apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
I0916 10:06:23.820] daemonset.apps/bind configured
W0916 10:06:23.921] E0916 10:06:23.858532   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:23.959] E0916 10:06:23.959228   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:24.057] E0916 10:06:24.056893   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:06:24.158] apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
I0916 10:06:24.158] (Bdaemonset.apps/bind image updated
I0916 10:06:24.159] apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
I0916 10:06:24.251] (Bdaemonset.apps/bind env updated
W0916 10:06:24.352] E0916 10:06:24.167963   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:06:24.453] apps.sh:42: Successful get daemonsets bind {{.metadata.generation}}: 3
I0916 10:06:24.463] (Bdaemonset.apps/bind resource requirements updated
I0916 10:06:24.569] apps.sh:44: Successful get daemonsets bind {{.metadata.generation}}: 4
I0916 10:06:24.676] (Bdaemonset.apps/bind restarted
I0916 10:06:24.782] apps.sh:48: Successful get daemonsets bind {{.metadata.generation}}: 5
I0916 10:06:24.868] (Bdaemonset.apps "bind" deleted
... skipping 37 lines ...
I0916 10:06:26.656] (Bapps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0916 10:06:26.749] (Bapps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0916 10:06:26.853] (Bdaemonset.apps/bind rolled back
I0916 10:06:26.958] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0916 10:06:27.058] (Bapps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0916 10:06:27.176] (BSuccessful
I0916 10:06:27.176] message:error: unable to find specified revision 1000000 in history
I0916 10:06:27.176] has:unable to find specified revision
I0916 10:06:27.266] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0916 10:06:27.358] (Bapps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0916 10:06:27.461] (Bdaemonset.apps/bind rolled back
I0916 10:06:27.557] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0916 10:06:27.648] (Bapps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I0916 10:06:29.013] Namespace:    namespace-1568628387-16255
I0916 10:06:29.014] Selector:     app=guestbook,tier=frontend
I0916 10:06:29.014] Labels:       app=guestbook
I0916 10:06:29.014]               tier=frontend
I0916 10:06:29.014] Annotations:  <none>
I0916 10:06:29.014] Replicas:     3 current / 3 desired
I0916 10:06:29.014] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 10:06:29.014] Pod Template:
I0916 10:06:29.015]   Labels:  app=guestbook
I0916 10:06:29.015]            tier=frontend
I0916 10:06:29.015]   Containers:
I0916 10:06:29.015]    php-redis:
I0916 10:06:29.015]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0916 10:06:29.158] Namespace:    namespace-1568628387-16255
I0916 10:06:29.158] Selector:     app=guestbook,tier=frontend
I0916 10:06:29.158] Labels:       app=guestbook
I0916 10:06:29.158]               tier=frontend
I0916 10:06:29.158] Annotations:  <none>
I0916 10:06:29.158] Replicas:     3 current / 3 desired
I0916 10:06:29.158] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 10:06:29.158] Pod Template:
I0916 10:06:29.159]   Labels:  app=guestbook
I0916 10:06:29.159]            tier=frontend
I0916 10:06:29.159]   Containers:
I0916 10:06:29.159]    php-redis:
I0916 10:06:29.159]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
I0916 10:06:29.160]   Type    Reason            Age   From                    Message
I0916 10:06:29.160]   ----    ------            ----  ----                    -------
I0916 10:06:29.161]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-vmb2s
I0916 10:06:29.161]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-v6d8f
I0916 10:06:29.161]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-dsqlr
I0916 10:06:29.161] (B
W0916 10:06:29.262] E0916 10:06:24.860147   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.262] E0916 10:06:24.961005   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.262] E0916 10:06:25.058623   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.263] E0916 10:06:25.169497   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.263] E0916 10:06:25.862116   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.263] E0916 10:06:25.962790   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.264] E0916 10:06:26.060264   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.264] E0916 10:06:26.172833   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.264] E0916 10:06:26.873696   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.269] E0916 10:06:26.877109   53026 daemon_controller.go:302] namespace-1568628384-11266/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1568628384-11266", SelfLink:"/apis/apps/v1/namespaces/namespace-1568628384-11266/daemonsets/bind", UID:"3eed515a-27ea-4d32-b54d-d943720a629e", ResourceVersion:"1546", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63704225185, loc:(*time.Location)(0x7751f40)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1568628384-11266\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0013b5640), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, 
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002605078), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001fbfd40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc0013b5660), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000f7a4e8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0026050cc)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0916 10:06:29.270] E0916 10:06:26.964361   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.270] E0916 10:06:27.062064   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.270] E0916 10:06:27.174255   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.276] E0916 10:06:27.475275   53026 daemon_controller.go:302] namespace-1568628384-11266/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1568628384-11266", SelfLink:"/apis/apps/v1/namespaces/namespace-1568628384-11266/daemonsets/bind", UID:"3eed515a-27ea-4d32-b54d-d943720a629e", ResourceVersion:"1550", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63704225185, loc:(*time.Location)(0x7751f40)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1568628384-11266\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0016f2940), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, 
v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0020b91d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002402180), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc0016f2960), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0002886c8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0020b924c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0916 10:06:29.277] E0916 10:06:27.875899   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.277] E0916 10:06:27.965972   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.277] E0916 10:06:28.063583   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.278] E0916 10:06:28.175914   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.278] I0916 10:06:28.330280   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628387-16255", Name:"frontend", UID:"91f285bb-f1cb-4e59-8fbe-79c73ce32421", APIVersion:"v1", ResourceVersion:"1558", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-g9mnw
W0916 10:06:29.279] I0916 10:06:28.333696   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628387-16255", Name:"frontend", UID:"91f285bb-f1cb-4e59-8fbe-79c73ce32421", APIVersion:"v1", ResourceVersion:"1558", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8q889
W0916 10:06:29.279] I0916 10:06:28.334400   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628387-16255", Name:"frontend", UID:"91f285bb-f1cb-4e59-8fbe-79c73ce32421", APIVersion:"v1", ResourceVersion:"1558", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-95g79
W0916 10:06:29.280] I0916 10:06:28.760823   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628387-16255", Name:"frontend", UID:"a0b143f5-2a35-424e-8ebc-65fbd477475a", APIVersion:"v1", ResourceVersion:"1574", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vmb2s
W0916 10:06:29.280] I0916 10:06:28.765453   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628387-16255", Name:"frontend", UID:"a0b143f5-2a35-424e-8ebc-65fbd477475a", APIVersion:"v1", ResourceVersion:"1574", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-v6d8f
W0916 10:06:29.281] I0916 10:06:28.766641   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628387-16255", Name:"frontend", UID:"a0b143f5-2a35-424e-8ebc-65fbd477475a", APIVersion:"v1", ResourceVersion:"1574", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dsqlr
W0916 10:06:29.281] E0916 10:06:28.877516   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.281] E0916 10:06:28.967276   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.282] E0916 10:06:29.065329   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:29.282] E0916 10:06:29.177786   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:06:29.383] core.sh:1065: Successful describe
I0916 10:06:29.384] Name:         frontend
I0916 10:06:29.384] Namespace:    namespace-1568628387-16255
I0916 10:06:29.384] Selector:     app=guestbook,tier=frontend
I0916 10:06:29.384] Labels:       app=guestbook
I0916 10:06:29.384]               tier=frontend
I0916 10:06:29.384] Annotations:  <none>
I0916 10:06:29.384] Replicas:     3 current / 3 desired
I0916 10:06:29.385] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 10:06:29.385] Pod Template:
I0916 10:06:29.385]   Labels:  app=guestbook
I0916 10:06:29.385]            tier=frontend
I0916 10:06:29.385]   Containers:
I0916 10:06:29.385]    php-redis:
I0916 10:06:29.385]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0916 10:06:29.393] Namespace:    namespace-1568628387-16255
I0916 10:06:29.393] Selector:     app=guestbook,tier=frontend
I0916 10:06:29.393] Labels:       app=guestbook
I0916 10:06:29.394]               tier=frontend
I0916 10:06:29.394] Annotations:  <none>
I0916 10:06:29.394] Replicas:     3 current / 3 desired
I0916 10:06:29.395] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 10:06:29.395] Pod Template:
I0916 10:06:29.395]   Labels:  app=guestbook
I0916 10:06:29.395]            tier=frontend
I0916 10:06:29.395]   Containers:
I0916 10:06:29.395]    php-redis:
I0916 10:06:29.395]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0916 10:06:29.546] Namespace:    namespace-1568628387-16255
I0916 10:06:29.546] Selector:     app=guestbook,tier=frontend
I0916 10:06:29.546] Labels:       app=guestbook
I0916 10:06:29.547]               tier=frontend
I0916 10:06:29.547] Annotations:  <none>
I0916 10:06:29.547] Replicas:     3 current / 3 desired
I0916 10:06:29.547] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 10:06:29.547] Pod Template:
I0916 10:06:29.547]   Labels:  app=guestbook
I0916 10:06:29.547]            tier=frontend
I0916 10:06:29.547]   Containers:
I0916 10:06:29.547]    php-redis:
I0916 10:06:29.547]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0916 10:06:29.658] Namespace:    namespace-1568628387-16255
I0916 10:06:29.658] Selector:     app=guestbook,tier=frontend
I0916 10:06:29.658] Labels:       app=guestbook
I0916 10:06:29.658]               tier=frontend
I0916 10:06:29.658] Annotations:  <none>
I0916 10:06:29.658] Replicas:     3 current / 3 desired
I0916 10:06:29.658] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 10:06:29.658] Pod Template:
I0916 10:06:29.659]   Labels:  app=guestbook
I0916 10:06:29.659]            tier=frontend
I0916 10:06:29.659]   Containers:
I0916 10:06:29.659]    php-redis:
I0916 10:06:29.659]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0916 10:06:29.768] Namespace:    namespace-1568628387-16255
I0916 10:06:29.769] Selector:     app=guestbook,tier=frontend
I0916 10:06:29.769] Labels:       app=guestbook
I0916 10:06:29.769]               tier=frontend
I0916 10:06:29.769] Annotations:  <none>
I0916 10:06:29.769] Replicas:     3 current / 3 desired
I0916 10:06:29.769] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 10:06:29.769] Pod Template:
I0916 10:06:29.770]   Labels:  app=guestbook
I0916 10:06:29.770]            tier=frontend
I0916 10:06:29.770]   Containers:
I0916 10:06:29.770]    php-redis:
I0916 10:06:29.770]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0916 10:06:29.882] Namespace:    namespace-1568628387-16255
I0916 10:06:29.882] Selector:     app=guestbook,tier=frontend
I0916 10:06:29.882] Labels:       app=guestbook
I0916 10:06:29.883]               tier=frontend
I0916 10:06:29.883] Annotations:  <none>
I0916 10:06:29.883] Replicas:     3 current / 3 desired
I0916 10:06:29.883] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 10:06:29.883] Pod Template:
I0916 10:06:29.883]   Labels:  app=guestbook
I0916 10:06:29.883]            tier=frontend
I0916 10:06:29.884]   Containers:
I0916 10:06:29.884]    php-redis:
I0916 10:06:29.884]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 19 lines ...
I0916 10:06:30.432] (Bcore.sh:1091: Successful get rc frontend {{.spec.replicas}}: 2
I0916 10:06:30.521] (Bcore.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
I0916 10:06:30.605] (Breplicationcontroller/frontend scaled
I0916 10:06:30.694] core.sh:1099: Successful get rc frontend {{.spec.replicas}}: 3
I0916 10:06:30.788] (Bcore.sh:1103: Successful get rc frontend {{.spec.replicas}}: 3
I0916 10:06:30.888] (Breplicationcontroller/frontend scaled
W0916 10:06:30.989] E0916 10:06:29.879129   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:30.990] E0916 10:06:29.968690   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:30.991] E0916 10:06:30.066545   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:30.992] I0916 10:06:30.071180   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628387-16255", Name:"frontend", UID:"a0b143f5-2a35-424e-8ebc-65fbd477475a", APIVersion:"v1", ResourceVersion:"1584", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-vmb2s
W0916 10:06:30.992] E0916 10:06:30.179254   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:30.993] error: Expected replicas to be 3, was 2
W0916 10:06:30.994] I0916 10:06:30.607419   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628387-16255", Name:"frontend", UID:"a0b143f5-2a35-424e-8ebc-65fbd477475a", APIVersion:"v1", ResourceVersion:"1590", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6v4dv
W0916 10:06:30.994] E0916 10:06:30.880836   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:30.995] I0916 10:06:30.896637   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628387-16255", Name:"frontend", UID:"a0b143f5-2a35-424e-8ebc-65fbd477475a", APIVersion:"v1", ResourceVersion:"1596", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-6v4dv
W0916 10:06:30.996] E0916 10:06:30.970866   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:31.069] E0916 10:06:31.068537   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:06:31.170] core.sh:1107: Successful get rc frontend {{.spec.replicas}}: 2
I0916 10:06:31.171] (Breplicationcontroller "frontend" deleted
W0916 10:06:31.271] E0916 10:06:31.181079   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:31.319] I0916 10:06:31.318457   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628387-16255", Name:"redis-master", UID:"0cbff3ff-3e1a-4865-833c-4fb16ffd5940", APIVersion:"v1", ResourceVersion:"1607", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-kn48q
I0916 10:06:31.421] replicationcontroller/redis-master created
I0916 10:06:31.545] replicationcontroller/redis-slave created
W0916 10:06:31.646] I0916 10:06:31.550782   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628387-16255", Name:"redis-slave", UID:"a1fcabc1-e192-48b5-9dff-02b4d4485f50", APIVersion:"v1", ResourceVersion:"1612", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-s8j4p
W0916 10:06:31.647] I0916 10:06:31.556157   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628387-16255", Name:"redis-slave", UID:"a1fcabc1-e192-48b5-9dff-02b4d4485f50", APIVersion:"v1", ResourceVersion:"1612", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-477dn
W0916 10:06:31.692] I0916 10:06:31.690781   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628387-16255", Name:"redis-master", UID:"0cbff3ff-3e1a-4865-833c-4fb16ffd5940", APIVersion:"v1", ResourceVersion:"1619", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-nhjdh
... skipping 4 lines ...
I0916 10:06:31.804] replicationcontroller/redis-master scaled
I0916 10:06:31.805] replicationcontroller/redis-slave scaled
I0916 10:06:31.831] core.sh:1117: Successful get rc redis-master {{.spec.replicas}}: 4
I0916 10:06:31.969] (Bcore.sh:1118: Successful get rc redis-slave {{.spec.replicas}}: 4
I0916 10:06:32.094] (Breplicationcontroller "redis-master" deleted
I0916 10:06:32.103] replicationcontroller "redis-slave" deleted
W0916 10:06:32.204] E0916 10:06:31.883541   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:32.204] E0916 10:06:31.972985   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:32.205] E0916 10:06:32.071033   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:32.205] E0916 10:06:32.182779   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:32.288] I0916 10:06:32.287866   53026 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568628387-16255", Name:"nginx-deployment", UID:"fe118a3e-cc56-499c-8746-3e804c781377", APIVersion:"apps/v1", ResourceVersion:"1654", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W0916 10:06:32.293] I0916 10:06:32.292446   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628387-16255", Name:"nginx-deployment-6986c7bc94", UID:"5daeef2d-bea4-48db-b7c1-4d27bf1d9bc6", APIVersion:"apps/v1", ResourceVersion:"1655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-7dqv6
W0916 10:06:32.297] I0916 10:06:32.296434   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628387-16255", Name:"nginx-deployment-6986c7bc94", UID:"5daeef2d-bea4-48db-b7c1-4d27bf1d9bc6", APIVersion:"apps/v1", ResourceVersion:"1655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-l8pc8
W0916 10:06:32.298] I0916 10:06:32.297603   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628387-16255", Name:"nginx-deployment-6986c7bc94", UID:"5daeef2d-bea4-48db-b7c1-4d27bf1d9bc6", APIVersion:"apps/v1", ResourceVersion:"1655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-fkh4b
W0916 10:06:32.395] I0916 10:06:32.394456   53026 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568628387-16255", Name:"nginx-deployment", UID:"fe118a3e-cc56-499c-8746-3e804c781377", APIVersion:"apps/v1", ResourceVersion:"1668", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6986c7bc94 to 1
W0916 10:06:32.407] I0916 10:06:32.406182   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628387-16255", Name:"nginx-deployment-6986c7bc94", UID:"5daeef2d-bea4-48db-b7c1-4d27bf1d9bc6", APIVersion:"apps/v1", ResourceVersion:"1669", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-fkh4b
... skipping 4 lines ...
I0916 10:06:32.568] (Bdeployment.apps "nginx-deployment" deleted
I0916 10:06:32.669] Successful
I0916 10:06:32.670] message:service/expose-test-deployment exposed
I0916 10:06:32.670] has:service/expose-test-deployment exposed
I0916 10:06:32.761] service "expose-test-deployment" deleted
I0916 10:06:32.866] Successful
I0916 10:06:32.867] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0916 10:06:32.867] See 'kubectl expose -h' for help and examples
I0916 10:06:32.867] has:invalid deployment: no selectors
W0916 10:06:32.968] E0916 10:06:32.885439   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:32.975] E0916 10:06:32.974779   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:33.052] I0916 10:06:33.051614   53026 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568628387-16255", Name:"nginx-deployment", UID:"2fdb9bcb-191d-4195-aaf5-2a0292f6d641", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W0916 10:06:33.056] I0916 10:06:33.055792   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628387-16255", Name:"nginx-deployment-6986c7bc94", UID:"f002386e-9ce1-4906-9b56-9a215b5897b4", APIVersion:"apps/v1", ResourceVersion:"1694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-2gbj7
W0916 10:06:33.060] I0916 10:06:33.059643   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628387-16255", Name:"nginx-deployment-6986c7bc94", UID:"f002386e-9ce1-4906-9b56-9a215b5897b4", APIVersion:"apps/v1", ResourceVersion:"1694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-nsfls
W0916 10:06:33.061] I0916 10:06:33.060049   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568628387-16255", Name:"nginx-deployment-6986c7bc94", UID:"f002386e-9ce1-4906-9b56-9a215b5897b4", APIVersion:"apps/v1", ResourceVersion:"1694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-tn98f
W0916 10:06:33.073] E0916 10:06:33.072192   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:06:33.173] deployment.apps/nginx-deployment created
I0916 10:06:33.174] core.sh:1146: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I0916 10:06:33.271] (Bservice/nginx-deployment exposed
I0916 10:06:33.371] core.sh:1150: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
I0916 10:06:33.464] (Bdeployment.apps "nginx-deployment" deleted
I0916 10:06:33.475] service "nginx-deployment" deleted
W0916 10:06:33.576] E0916 10:06:33.184435   53026 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:06:33.658] I0916 10:06:33.658024   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628387-16255", Name:"frontend", UID:"1eb6ee34-0b51-4ff0-8c49-395feedc712a", APIVersion:"v1", ResourceVersion:"1721", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mvfjk
W0916 10:06:33.662] I0916 10:06:33.661792   53026 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568628387-16255", Name:"frontend", UID:"1eb6ee34-0b51-4ff0-8c49-395feedc712a", APIVersion:"v1", ResourceVersion:"1721", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nbsj2
W0916