Result: FAILURE
Tests: 1 failed / 2862 succeeded
Started: 2019-09-16 02:29
Elapsed: 27m34s
Revision:
Builder: gke-prow-ssd-pool-1a225945-86t9
pod: b6c20b33-d829-11e9-9f18-22ab134e4c57
resultstore: https://source.cloud.google.com/results/invocations/1fb9d536-18e6-420b-adce-5193cd1b2bc8/targets/test
infra-commit: e1cbc3ccd
repo: k8s.io/kubernetes
repo-commit: ba07527278ef2cde9c27886ec3333cfef472112a
repos: {k8s.io/kubernetes: master}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestNodePIDPressure 33s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$
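The log below is the output of that command. For background, the condition this test exercises is a node advertising PID pressure in its status, which should keep new pods from being scheduled onto it. The following is a minimal sketch of that condition, not taken from the job output or from the test source; the helper function and node name are hypothetical.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hasPIDPressure reports whether a node currently advertises the
// NodePIDPressure condition with status True.
func hasPIDPressure(node *v1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodePIDPressure && cond.Status == v1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// A hypothetical node object shaped like the one such a test would create:
	// it reports PID pressure, so pods are expected to remain Pending.
	node := &v1.Node{
		ObjectMeta: metav1.ObjectMeta{Name: "node-with-pid-pressure"},
		Status: v1.NodeStatus{
			Conditions: []v1.NodeCondition{
				{Type: v1.NodePIDPressure, Status: v1.ConditionTrue},
			},
		},
	}
	fmt.Println("node under PID pressure:", hasPIDPressure(node))
}
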
=== RUN   TestNodePIDPressure
W0916 02:51:46.932265  108846 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0916 02:51:46.932285  108846 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0916 02:51:46.932299  108846 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0916 02:51:46.932319  108846 master.go:259] Using reconciler: 
I0916 02:51:46.935265  108846 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:46.936965  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:46.937551  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:46.939042  108846 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0916 02:51:46.939117  108846 reflector.go:158] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0916 02:51:46.939138  108846 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:46.940782  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:46.940820  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:46.941245  108846 watch_cache.go:405] Replace watchCache (rev: 30560) 
I0916 02:51:46.942600  108846 store.go:1342] Monitoring events count at <storage-prefix>//events
I0916 02:51:46.942672  108846 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:46.942823  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:46.942843  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:46.942944  108846 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0916 02:51:46.944874  108846 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0916 02:51:46.945116  108846 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:46.944973  108846 watch_cache.go:405] Replace watchCache (rev: 30560) 
I0916 02:51:46.945001  108846 reflector.go:158] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0916 02:51:46.945792  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:46.945955  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:46.946777  108846 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0916 02:51:46.946935  108846 watch_cache.go:405] Replace watchCache (rev: 30560) 
I0916 02:51:46.947017  108846 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:46.947165  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:46.947190  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:46.947292  108846 reflector.go:158] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0916 02:51:46.948028  108846 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0916 02:51:46.948244  108846 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:46.948332  108846 reflector.go:158] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0916 02:51:46.948371  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:46.948394  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:46.948918  108846 watch_cache.go:405] Replace watchCache (rev: 30560) 
I0916 02:51:46.950092  108846 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0916 02:51:46.950203  108846 reflector.go:158] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0916 02:51:46.950339  108846 watch_cache.go:405] Replace watchCache (rev: 30560) 
I0916 02:51:46.950895  108846 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:46.951038  108846 watch_cache.go:405] Replace watchCache (rev: 30560) 
I0916 02:51:46.951132  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:46.951188  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:46.952162  108846 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0916 02:51:46.952387  108846 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:46.952481  108846 reflector.go:158] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0916 02:51:46.952537  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:46.952557  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:46.953672  108846 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0916 02:51:46.954039  108846 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:46.953734  108846 reflector.go:158] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0916 02:51:46.954747  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:46.967315  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:46.955241  108846 watch_cache.go:405] Replace watchCache (rev: 30560) 
I0916 02:51:46.968206  108846 watch_cache.go:405] Replace watchCache (rev: 30560) 
I0916 02:51:46.973847  108846 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0916 02:51:46.974090  108846 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:46.974207  108846 reflector.go:158] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0916 02:51:46.974498  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:46.974910  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:46.977562  108846 watch_cache.go:405] Replace watchCache (rev: 30560) 
I0916 02:51:46.986872  108846 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0916 02:51:46.987335  108846 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:46.989830  108846 reflector.go:158] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0916 02:51:46.990325  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:46.990443  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:46.993853  108846 watch_cache.go:405] Replace watchCache (rev: 30560) 
I0916 02:51:46.994337  108846 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0916 02:51:46.994478  108846 reflector.go:158] Listing and watching *core.Node from storage/cacher.go:/minions
I0916 02:51:46.994575  108846 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:46.994795  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:46.994826  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:46.996780  108846 watch_cache.go:405] Replace watchCache (rev: 30560) 
I0916 02:51:46.999514  108846 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0916 02:51:46.999837  108846 reflector.go:158] Listing and watching *core.Pod from storage/cacher.go:/pods
I0916 02:51:46.999929  108846 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.000302  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.000330  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.001984  108846 watch_cache.go:405] Replace watchCache (rev: 30560) 
I0916 02:51:47.002468  108846 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0916 02:51:47.002819  108846 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.003072  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.003492  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.003222  108846 reflector.go:158] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0916 02:51:47.004537  108846 watch_cache.go:405] Replace watchCache (rev: 30561) 
I0916 02:51:47.006221  108846 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0916 02:51:47.006260  108846 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.006442  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.006459  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.006550  108846 reflector.go:158] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0916 02:51:47.007530  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.007557  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.009020  108846 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.009142  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.009161  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.009565  108846 watch_cache.go:405] Replace watchCache (rev: 30562) 
I0916 02:51:47.010932  108846 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0916 02:51:47.010956  108846 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0916 02:51:47.011463  108846 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.011759  108846 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.012554  108846 reflector.go:158] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0916 02:51:47.012741  108846 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.013555  108846 watch_cache.go:405] Replace watchCache (rev: 30562) 
I0916 02:51:47.013794  108846 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.014843  108846 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.015772  108846 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.017438  108846 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.017790  108846 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.018157  108846 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.019025  108846 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.019927  108846 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.020522  108846 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.022126  108846 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.022792  108846 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.023766  108846 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.024241  108846 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.025495  108846 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.025875  108846 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.026205  108846 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.026480  108846 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.026958  108846 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.027243  108846 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.027661  108846 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.028649  108846 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.029462  108846 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.030552  108846 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.031581  108846 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.032247  108846 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.032717  108846 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.033728  108846 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.034378  108846 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.035723  108846 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.036992  108846 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.037929  108846 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.039463  108846 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.040133  108846 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.040514  108846 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0916 02:51:47.040660  108846 master.go:461] Enabling API group "authentication.k8s.io".
I0916 02:51:47.040766  108846 master.go:461] Enabling API group "authorization.k8s.io".
I0916 02:51:47.041198  108846 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.041520  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.041772  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.042970  108846 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0916 02:51:47.043053  108846 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0916 02:51:47.043153  108846 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.043329  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.043348  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.044315  108846 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0916 02:51:47.044473  108846 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0916 02:51:47.044515  108846 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.044972  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.045007  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.045664  108846 watch_cache.go:405] Replace watchCache (rev: 30565) 
I0916 02:51:47.045864  108846 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0916 02:51:47.045886  108846 master.go:461] Enabling API group "autoscaling".
I0916 02:51:47.045906  108846 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0916 02:51:47.046060  108846 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.046096  108846 watch_cache.go:405] Replace watchCache (rev: 30565) 
I0916 02:51:47.046235  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.046255  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.048346  108846 watch_cache.go:405] Replace watchCache (rev: 30565) 
I0916 02:51:47.050424  108846 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0916 02:51:47.050637  108846 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.050792  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.050811  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.050904  108846 reflector.go:158] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0916 02:51:47.053047  108846 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0916 02:51:47.053082  108846 master.go:461] Enabling API group "batch".
I0916 02:51:47.053210  108846 reflector.go:158] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0916 02:51:47.053759  108846 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.054036  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.054176  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.054402  108846 watch_cache.go:405] Replace watchCache (rev: 30565) 
I0916 02:51:47.055700  108846 watch_cache.go:405] Replace watchCache (rev: 30565) 
I0916 02:51:47.055725  108846 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0916 02:51:47.055752  108846 master.go:461] Enabling API group "certificates.k8s.io".
I0916 02:51:47.055928  108846 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.055974  108846 reflector.go:158] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0916 02:51:47.056728  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.056758  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.057395  108846 watch_cache.go:405] Replace watchCache (rev: 30565) 
I0916 02:51:47.058256  108846 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0916 02:51:47.058426  108846 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0916 02:51:47.058503  108846 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.058848  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.058887  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.059809  108846 watch_cache.go:405] Replace watchCache (rev: 30565) 
I0916 02:51:47.061198  108846 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0916 02:51:47.061219  108846 master.go:461] Enabling API group "coordination.k8s.io".
I0916 02:51:47.061235  108846 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0916 02:51:47.061345  108846 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0916 02:51:47.061457  108846 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.061641  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.061662  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.062684  108846 watch_cache.go:405] Replace watchCache (rev: 30565) 
I0916 02:51:47.063645  108846 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0916 02:51:47.063680  108846 master.go:461] Enabling API group "extensions".
I0916 02:51:47.063775  108846 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0916 02:51:47.063895  108846 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.064958  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.065026  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.065994  108846 watch_cache.go:405] Replace watchCache (rev: 30565) 
I0916 02:51:47.066448  108846 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0916 02:51:47.066484  108846 reflector.go:158] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0916 02:51:47.067773  108846 watch_cache.go:405] Replace watchCache (rev: 30565) 
I0916 02:51:47.067103  108846 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.068201  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.068390  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.069532  108846 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0916 02:51:47.069664  108846 master.go:461] Enabling API group "networking.k8s.io".
I0916 02:51:47.069562  108846 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0916 02:51:47.069940  108846 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.070355  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.070405  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.070720  108846 watch_cache.go:405] Replace watchCache (rev: 30565) 
I0916 02:51:47.072200  108846 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0916 02:51:47.072358  108846 master.go:461] Enabling API group "node.k8s.io".
I0916 02:51:47.072315  108846 reflector.go:158] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0916 02:51:47.073809  108846 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.074058  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.074327  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.074731  108846 watch_cache.go:405] Replace watchCache (rev: 30565) 
I0916 02:51:47.077188  108846 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0916 02:51:47.077310  108846 reflector.go:158] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0916 02:51:47.077552  108846 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.077771  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.077841  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.080071  108846 watch_cache.go:405] Replace watchCache (rev: 30565) 
I0916 02:51:47.081832  108846 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0916 02:51:47.081973  108846 reflector.go:158] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0916 02:51:47.081992  108846 master.go:461] Enabling API group "policy".
I0916 02:51:47.082110  108846 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.082281  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.082299  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.083491  108846 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0916 02:51:47.083620  108846 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0916 02:51:47.083973  108846 watch_cache.go:405] Replace watchCache (rev: 30565) 
I0916 02:51:47.084989  108846 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.085190  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.085251  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.086971  108846 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0916 02:51:47.087044  108846 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.087211  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.087235  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.087327  108846 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0916 02:51:47.089043  108846 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0916 02:51:47.089253  108846 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.089404  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.089423  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.089506  108846 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0916 02:51:47.089985  108846 watch_cache.go:405] Replace watchCache (rev: 30570) 
I0916 02:51:47.091101  108846 watch_cache.go:405] Replace watchCache (rev: 30570) 
I0916 02:51:47.092788  108846 watch_cache.go:405] Replace watchCache (rev: 30570) 
I0916 02:51:47.093696  108846 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0916 02:51:47.093758  108846 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.093901  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.093921  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.094029  108846 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0916 02:51:47.094919  108846 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0916 02:51:47.095017  108846 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0916 02:51:47.095141  108846 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.095265  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.095284  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.096397  108846 watch_cache.go:405] Replace watchCache (rev: 30570) 
I0916 02:51:47.097784  108846 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0916 02:51:47.097821  108846 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.097849  108846 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0916 02:51:47.097950  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.097968  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.099215  108846 watch_cache.go:405] Replace watchCache (rev: 30570) 
I0916 02:51:47.099786  108846 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0916 02:51:47.099989  108846 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.100105  108846 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0916 02:51:47.100124  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.100198  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.101446  108846 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0916 02:51:47.101480  108846 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0916 02:51:47.103825  108846 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.103934  108846 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0916 02:51:47.103968  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.103988  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.105735  108846 watch_cache.go:405] Replace watchCache (rev: 30570) 
I0916 02:51:47.106243  108846 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0916 02:51:47.106724  108846 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.106938  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.106968  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.107030  108846 watch_cache.go:405] Replace watchCache (rev: 30570) 
I0916 02:51:47.107072  108846 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0916 02:51:47.108978  108846 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0916 02:51:47.108989  108846 watch_cache.go:405] Replace watchCache (rev: 30570) 
I0916 02:51:47.109000  108846 master.go:461] Enabling API group "scheduling.k8s.io".
I0916 02:51:47.109170  108846 master.go:450] Skipping disabled API group "settings.k8s.io".
I0916 02:51:47.109342  108846 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.109485  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.109503  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.109550  108846 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0916 02:51:47.110632  108846 watch_cache.go:405] Replace watchCache (rev: 30570) 
I0916 02:51:47.110806  108846 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0916 02:51:47.110934  108846 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0916 02:51:47.110986  108846 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.111107  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.111129  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.112095  108846 watch_cache.go:405] Replace watchCache (rev: 30570) 
I0916 02:51:47.113002  108846 watch_cache.go:405] Replace watchCache (rev: 30570) 
I0916 02:51:47.113239  108846 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0916 02:51:47.113305  108846 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.113505  108846 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0916 02:51:47.113789  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.113810  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.115287  108846 watch_cache.go:405] Replace watchCache (rev: 30570) 
I0916 02:51:47.115370  108846 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0916 02:51:47.115409  108846 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.115537  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.115559  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.115587  108846 reflector.go:158] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0916 02:51:47.117117  108846 watch_cache.go:405] Replace watchCache (rev: 30570) 
I0916 02:51:47.118024  108846 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0916 02:51:47.118106  108846 reflector.go:158] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0916 02:51:47.118479  108846 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.119010  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.119146  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.119271  108846 watch_cache.go:405] Replace watchCache (rev: 30570) 
I0916 02:51:47.120289  108846 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0916 02:51:47.120380  108846 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0916 02:51:47.120754  108846 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.121134  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.121250  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.121569  108846 watch_cache.go:405] Replace watchCache (rev: 30570) 
I0916 02:51:47.122494  108846 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0916 02:51:47.122689  108846 master.go:461] Enabling API group "storage.k8s.io".
I0916 02:51:47.122965  108846 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.123095  108846 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0916 02:51:47.123183  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.123203  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.127293  108846 watch_cache.go:405] Replace watchCache (rev: 30570) 
I0916 02:51:47.128835  108846 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0916 02:51:47.129194  108846 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.129482  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.129596  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.129738  108846 reflector.go:158] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0916 02:51:47.132208  108846 watch_cache.go:405] Replace watchCache (rev: 30570) 
I0916 02:51:47.135442  108846 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0916 02:51:47.135581  108846 reflector.go:158] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0916 02:51:47.138473  108846 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.138721  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.138750  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.146322  108846 watch_cache.go:405] Replace watchCache (rev: 30574) 
I0916 02:51:47.146912  108846 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0916 02:51:47.146972  108846 reflector.go:158] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0916 02:51:47.147190  108846 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.147394  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.147422  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.149927  108846 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0916 02:51:47.150275  108846 reflector.go:158] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0916 02:51:47.150638  108846 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.150810  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.150833  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.151435  108846 watch_cache.go:405] Replace watchCache (rev: 30580) 
I0916 02:51:47.154379  108846 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0916 02:51:47.154404  108846 master.go:461] Enabling API group "apps".
I0916 02:51:47.154458  108846 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.154638  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.154687  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.154718  108846 reflector.go:158] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0916 02:51:47.156037  108846 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0916 02:51:47.156108  108846 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.156248  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.156274  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.156420  108846 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0916 02:51:47.157644  108846 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0916 02:51:47.157687  108846 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.157815  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.157838  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.157960  108846 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0916 02:51:47.158684  108846 watch_cache.go:405] Replace watchCache (rev: 30586) 
I0916 02:51:47.159564  108846 watch_cache.go:405] Replace watchCache (rev: 30586) 
I0916 02:51:47.162660  108846 watch_cache.go:405] Replace watchCache (rev: 30586) 
I0916 02:51:47.162682  108846 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0916 02:51:47.162728  108846 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.162902  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.162930  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.163487  108846 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0916 02:51:47.164293  108846 watch_cache.go:405] Replace watchCache (rev: 30590) 
I0916 02:51:47.164737  108846 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0916 02:51:47.164761  108846 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0916 02:51:47.164799  108846 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.164816  108846 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0916 02:51:47.165649  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.165678  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.166218  108846 watch_cache.go:405] Replace watchCache (rev: 30591) 
I0916 02:51:47.167084  108846 watch_cache.go:405] Replace watchCache (rev: 30591) 
I0916 02:51:47.170420  108846 store.go:1342] Monitoring events count at <storage-prefix>//events
I0916 02:51:47.170628  108846 master.go:461] Enabling API group "events.k8s.io".
I0916 02:51:47.170673  108846 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0916 02:51:47.171296  108846 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.171909  108846 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.172568  108846 watch_cache.go:405] Replace watchCache (rev: 30596) 
I0916 02:51:47.173112  108846 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.173462  108846 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.173827  108846 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.174148  108846 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.174626  108846 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.174909  108846 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.175560  108846 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.175872  108846 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.177171  108846 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.177856  108846 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.179636  108846 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.180083  108846 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.181562  108846 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.182129  108846 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.183594  108846 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.184000  108846 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.185319  108846 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.185864  108846 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 02:51:47.186064  108846 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0916 02:51:47.187352  108846 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.187714  108846 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.188156  108846 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.189420  108846 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.190943  108846 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.192427  108846 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.192887  108846 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.194499  108846 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.195574  108846 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.195998  108846 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.197513  108846 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 02:51:47.197765  108846 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0916 02:51:47.199310  108846 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.199760  108846 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.200948  108846 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.201918  108846 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.203012  108846 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.204590  108846 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.205537  108846 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.206394  108846 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.207319  108846 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.208455  108846 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.209408  108846 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 02:51:47.209593  108846 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0916 02:51:47.210828  108846 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.211650  108846 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 02:51:47.211841  108846 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0916 02:51:47.212755  108846 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.213893  108846 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.214286  108846 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.215430  108846 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.216140  108846 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.216916  108846 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.218157  108846 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 02:51:47.218366  108846 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0916 02:51:47.219518  108846 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.221084  108846 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.221561  108846 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.222551  108846 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.223089  108846 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.223578  108846 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.224912  108846 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.225345  108846 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.225904  108846 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.227112  108846 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.228113  108846 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.228537  108846 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 02:51:47.228746  108846 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0916 02:51:47.228840  108846 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0916 02:51:47.229827  108846 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.230735  108846 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.232204  108846 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.233074  108846 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.233915  108846 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"37cd6cf3-b523-44e4-b507-388972e3ba84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 02:51:47.237584  108846 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 02:51:47.237675  108846 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0916 02:51:47.237689  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:47.237704  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:47.237720  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:47.237729  108846 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:47.237762  108846 httplog.go:90] GET /healthz: (272.834µs) 0 [Go-http-client/1.1 127.0.0.1:57410]
I0916 02:51:47.238741  108846 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.412985ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57408]
I0916 02:51:47.244133  108846 httplog.go:90] GET /api/v1/services: (3.833342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57408]
I0916 02:51:47.250023  108846 httplog.go:90] GET /api/v1/services: (1.056644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57408]
I0916 02:51:47.252438  108846 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 02:51:47.252547  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:47.252557  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:47.252567  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:47.252573  108846 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:47.252597  108846 httplog.go:90] GET /healthz: (270.088µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57408]
I0916 02:51:47.253785  108846 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.407716ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57410]
I0916 02:51:47.254735  108846 httplog.go:90] GET /api/v1/services: (1.047894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57408]
I0916 02:51:47.255174  108846 httplog.go:90] GET /api/v1/services: (1.088705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:47.256087  108846 httplog.go:90] POST /api/v1/namespaces: (1.602676ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57410]
I0916 02:51:47.257364  108846 httplog.go:90] GET /api/v1/namespaces/kube-public: (908.456µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:47.259244  108846 httplog.go:90] POST /api/v1/namespaces: (1.405474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:47.261006  108846 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.360295ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:47.262980  108846 httplog.go:90] POST /api/v1/namespaces: (1.342272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:47.338673  108846 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 02:51:47.338706  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:47.338727  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:47.338733  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:47.338739  108846 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:47.338772  108846 httplog.go:90] GET /healthz: (261.151µs) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:47.353432  108846 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 02:51:47.353469  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:47.353482  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:47.353491  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:47.353500  108846 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:47.353544  108846 httplog.go:90] GET /healthz: (268.456µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:47.438658  108846 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 02:51:47.438693  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:47.438706  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:47.438715  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:47.438724  108846 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:47.438767  108846 httplog.go:90] GET /healthz: (280.883µs) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:47.453677  108846 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 02:51:47.453716  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:47.453729  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:47.453739  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:47.453747  108846 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:47.453802  108846 httplog.go:90] GET /healthz: (567.401µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:47.538732  108846 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 02:51:47.538789  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:47.538803  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:47.538815  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:47.538824  108846 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:47.538882  108846 httplog.go:90] GET /healthz: (385.214µs) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:47.553480  108846 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 02:51:47.553665  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:47.553730  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:47.553784  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:47.553821  108846 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:47.553968  108846 httplog.go:90] GET /healthz: (748.097µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:47.638812  108846 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 02:51:47.638990  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:47.639054  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:47.639099  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:47.639144  108846 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:47.639374  108846 httplog.go:90] GET /healthz: (771.097µs) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:47.653403  108846 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 02:51:47.653655  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:47.653752  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:47.653808  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:47.653838  108846 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:47.653986  108846 httplog.go:90] GET /healthz: (752.441µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:47.738631  108846 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 02:51:47.738662  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:47.738686  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:47.738697  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:47.738705  108846 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:47.738739  108846 httplog.go:90] GET /healthz: (266.275µs) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:47.753237  108846 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 02:51:47.753269  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:47.753282  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:47.753291  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:47.753300  108846 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:47.753325  108846 httplog.go:90] GET /healthz: (222.215µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:47.838683  108846 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 02:51:47.838721  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:47.838733  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:47.838744  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:47.838764  108846 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:47.838793  108846 httplog.go:90] GET /healthz: (268.228µs) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:47.853322  108846 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 02:51:47.853371  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:47.853384  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:47.853393  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:47.853402  108846 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:47.853432  108846 httplog.go:90] GET /healthz: (250.959µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:47.932323  108846 client.go:361] parsed scheme: "endpoint"
I0916 02:51:47.932420  108846 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 02:51:47.939793  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:47.939822  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:47.939839  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:47.939848  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:47.939896  108846 httplog.go:90] GET /healthz: (1.457523ms) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:47.954446  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:47.954478  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:47.954490  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:47.954504  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:47.954558  108846 httplog.go:90] GET /healthz: (1.323762ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:48.039965  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.039994  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:48.040004  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:48.040012  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:48.040051  108846 httplog.go:90] GET /healthz: (1.590887ms) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:48.054759  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.054791  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:48.054801  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:48.054809  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:48.054849  108846 httplog.go:90] GET /healthz: (1.585929ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:48.139584  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.139635  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:48.139648  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:48.139658  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:48.139702  108846 httplog.go:90] GET /healthz: (1.233513ms) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:48.156396  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.156448  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:48.156458  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:48.156467  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:48.157038  108846 httplog.go:90] GET /healthz: (3.291971ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:48.239125  108846 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.89028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:48.239702  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.879559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57426]
I0916 02:51:48.241557  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.52988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57426]
I0916 02:51:48.241774  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.241792  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:48.241802  108846 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 02:51:48.241809  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 02:51:48.241851  108846 httplog.go:90] GET /healthz: (2.661559ms) 0 [Go-http-client/1.1 127.0.0.1:57428]
I0916 02:51:48.242064  108846 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (2.427855ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:48.246386  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.798444ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57426]
I0916 02:51:48.246579  108846 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (9.307645ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57408]
I0916 02:51:48.249928  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (2.799164ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57426]
I0916 02:51:48.250517  108846 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (3.113469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57408]
I0916 02:51:48.250773  108846 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0916 02:51:48.252807  108846 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (10.010642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:48.254131  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (3.060242ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57426]
I0916 02:51:48.254382  108846 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (3.054947ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57408]
I0916 02:51:48.257444  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.257466  108846 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 02:51:48.257476  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:48.257504  108846 httplog.go:90] GET /healthz: (4.339974ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.257666  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (2.707305ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57426]
I0916 02:51:48.257803  108846 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.595607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57408]
I0916 02:51:48.258020  108846 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0916 02:51:48.258036  108846 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0916 02:51:48.259291  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.265282ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.260701  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.038622ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.262336  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.134241ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.263949  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.155014ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.266603  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.246436ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.266827  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0916 02:51:48.277222  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (10.23001ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.288865  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (10.776387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.289401  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0916 02:51:48.291289  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.635602ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.293910  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.03447ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.294184  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0916 02:51:48.295304  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (960.494µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.298002  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.315223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.298227  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0916 02:51:48.299251  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (860.207µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.301837  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.234732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.302032  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0916 02:51:48.303425  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.228868ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.305495  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.687145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.305744  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0916 02:51:48.306959  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.041491ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.309216  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.9115ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.309425  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0916 02:51:48.310798  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.23326ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.313274  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.047128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.313481  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0916 02:51:48.314898  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.231727ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.318635  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.256106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.319013  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0916 02:51:48.320747  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.333471ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.323118  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.927416ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.323623  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0916 02:51:48.325510  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.595068ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.327911  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.449621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.328214  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0916 02:51:48.329184  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (806.358µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.331869  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.292872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.332165  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0916 02:51:48.333479  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.152347ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.335476  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.635225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.335661  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0916 02:51:48.337036  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (961.65µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.339100  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.642386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.339767  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0916 02:51:48.340244  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.340412  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:48.340595  108846 httplog.go:90] GET /healthz: (1.481201ms) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:48.341447  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.500352ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.344155  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.704886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.344415  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0916 02:51:48.345714  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (950.243µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.347727  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.670254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.347977  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0916 02:51:48.349118  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (886.603µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.351583  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.002179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.351829  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0916 02:51:48.352997  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (939.404µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.354104  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.354228  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:48.354424  108846 httplog.go:90] GET /healthz: (1.156976ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:48.355445  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.766869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.355698  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0916 02:51:48.357343  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.028041ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.359232  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.479659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.359405  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0916 02:51:48.360418  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (823.051µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.362548  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.65176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.362840  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0916 02:51:48.363859  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (790.095µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.365950  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.764811ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.366217  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0916 02:51:48.367365  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (940.033µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.369209  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.437651ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.369387  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0916 02:51:48.370463  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (827.777µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.372714  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.579737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.372900  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0916 02:51:48.374152  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.065913ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.376129  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.552193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.376421  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0916 02:51:48.377364  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (715.409µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.379017  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.15325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.379264  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0916 02:51:48.380189  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (741.478µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.382188  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.58667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.382645  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0916 02:51:48.383958  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (999.418µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.385997  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.452624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.386250  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0916 02:51:48.387288  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (790.9µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.389423  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.611259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.389684  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0916 02:51:48.390870  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (889.744µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.392979  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.631633ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.393242  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0916 02:51:48.395047  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.570437ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.397326  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.794906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.397604  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0916 02:51:48.398800  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (845.411µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.401079  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.865292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.401531  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0916 02:51:48.402644  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (831.313µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.405262  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.194126ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.405800  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0916 02:51:48.406798  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (790.663µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.408709  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.542632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.408941  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0916 02:51:48.410019  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (876.83µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.411822  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.388586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.412028  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0916 02:51:48.413192  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (973.879µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.415502  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.705414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.415707  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0916 02:51:48.416724  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (786.643µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.418769  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.669248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.419055  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0916 02:51:48.420162  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (919.296µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.422453  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.655552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.422769  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0916 02:51:48.424080  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.007243ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.425989  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.383127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.426238  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0916 02:51:48.427345  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (866.407µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.429478  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.43874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.429779  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0916 02:51:48.431082  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.131564ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.433287  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.829218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.433538  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0916 02:51:48.434716  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (791.868µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.436908  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.714444ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.437154  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0916 02:51:48.438332  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (956.646µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.439234  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.439263  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:48.439324  108846 httplog.go:90] GET /healthz: (924.94µs) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:48.440515  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.758205ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.440759  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0916 02:51:48.442028  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.058628ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.444852  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.429766ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.445139  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0916 02:51:48.446200  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (872.811µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.448343  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.614879ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.448593  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0916 02:51:48.449693  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (860.4µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.451823  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.625427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.452092  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0916 02:51:48.453936  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.635819ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.454695  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.454716  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:48.454745  108846 httplog.go:90] GET /healthz: (1.639291ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:48.456740  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.231638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.456972  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0916 02:51:48.457949  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (806.896µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.460194  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.68682ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.460400  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0916 02:51:48.461579  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (951.531µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.464567  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.183172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.464926  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0916 02:51:48.466307  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.095705ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.468637  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.782299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.468994  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0916 02:51:48.470208  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.020198ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.472427  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.52571ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.473056  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0916 02:51:48.474393  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (977.02µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.477525  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.526195ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.477875  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0916 02:51:48.478984  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (804.264µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.500015  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.544315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.500375  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0916 02:51:48.519014  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.53917ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.541290  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.896584ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.541600  108846 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0916 02:51:48.542421  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.542540  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:48.542791  108846 httplog.go:90] GET /healthz: (4.356842ms) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:48.554476  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.554514  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:48.554562  108846 httplog.go:90] GET /healthz: (1.343222ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:48.558655  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.244107ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:48.579862  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.492549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:48.580121  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0916 02:51:48.599563  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (2.176838ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:48.620003  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.547059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:48.620802  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0916 02:51:48.638935  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.512981ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:48.639781  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.639807  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:48.639843  108846 httplog.go:90] GET /healthz: (1.41182ms) 0 [Go-http-client/1.1 127.0.0.1:57428]
I0916 02:51:48.654310  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.654346  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:48.654392  108846 httplog.go:90] GET /healthz: (1.137246ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.659939  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.554203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.660330  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0916 02:51:48.678863  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.426962ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.699746  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.327699ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.700463  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0916 02:51:48.719731  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (2.278655ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.740994  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.741030  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:48.741078  108846 httplog.go:90] GET /healthz: (2.634864ms) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:48.741590  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.081888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.741917  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0916 02:51:48.754235  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.754269  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:48.754308  108846 httplog.go:90] GET /healthz: (1.134756ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.758797  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.460374ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.779952  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.434635ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.780468  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0916 02:51:48.798801  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.392439ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.819916  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.511225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.820312  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0916 02:51:48.839938  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.839974  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:48.840009  108846 httplog.go:90] GET /healthz: (1.526351ms) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:48.840696  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (3.290198ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.855197  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.855228  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:48.855273  108846 httplog.go:90] GET /healthz: (2.150535ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.859582  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.106491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.859832  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0916 02:51:48.878730  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.399913ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.899859  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.384309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.900182  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0916 02:51:48.918986  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.569947ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.940294  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.893286ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:48.940702  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.940726  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:48.940771  108846 httplog.go:90] GET /healthz: (2.32367ms) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:48.940877  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0916 02:51:48.954430  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:48.954469  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:48.954516  108846 httplog.go:90] GET /healthz: (1.239135ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:48.959342  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.984549ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:48.980153  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.657921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:48.980639  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0916 02:51:48.998798  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.436657ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.019466  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.081856ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.019781  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0916 02:51:49.038815  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.440541ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.039284  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.039328  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.039364  108846 httplog.go:90] GET /healthz: (944.537µs) 0 [Go-http-client/1.1 127.0.0.1:57428]
I0916 02:51:49.054401  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.054437  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.054480  108846 httplog.go:90] GET /healthz: (1.292155ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.059151  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.80826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.059726  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0916 02:51:49.078773  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.410643ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.100543  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.133563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.100941  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0916 02:51:49.118775  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.435654ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.139697  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.139728  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.139809  108846 httplog.go:90] GET /healthz: (1.358027ms) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:49.140057  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.671846ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.140818  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0916 02:51:49.154749  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.154788  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.154827  108846 httplog.go:90] GET /healthz: (1.586866ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.158950  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.644958ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.179968  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.613452ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.180239  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0916 02:51:49.199116  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.735566ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.220134  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.676973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.220441  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0916 02:51:49.239319  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.239354  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.239391  108846 httplog.go:90] GET /healthz: (999.718µs) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:49.239568  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (2.101723ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.254581  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.254642  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.254719  108846 httplog.go:90] GET /healthz: (1.344286ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.259812  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.298761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.260213  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0916 02:51:49.278793  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.405102ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.300198  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.775133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.300453  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0916 02:51:49.318871  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.428603ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.341841  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.341874  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.341912  108846 httplog.go:90] GET /healthz: (1.775139ms) 0 [Go-http-client/1.1 127.0.0.1:57428]
I0916 02:51:49.342251  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.868355ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.342738  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0916 02:51:49.355002  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.355039  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.355076  108846 httplog.go:90] GET /healthz: (1.853256ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.359334  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.268506ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.380684  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.208845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.381054  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0916 02:51:49.398917  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.48912ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.419709  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.310154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.420013  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0916 02:51:49.440181  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (2.617126ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.440820  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.440879  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.440946  108846 httplog.go:90] GET /healthz: (2.551972ms) 0 [Go-http-client/1.1 127.0.0.1:57428]
I0916 02:51:49.454572  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.454697  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.454778  108846 httplog.go:90] GET /healthz: (1.448705ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.459849  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.45081ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.460111  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0916 02:51:49.479314  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.818205ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.500007  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.546479ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.500392  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0916 02:51:49.518983  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.521773ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.539821  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.539859  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.539913  108846 httplog.go:90] GET /healthz: (1.482102ms) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:49.541025  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.553154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.541509  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0916 02:51:49.554370  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.554407  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.554444  108846 httplog.go:90] GET /healthz: (1.166886ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.558653  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.295331ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.579934  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.508351ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.580179  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0916 02:51:49.599226  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.777327ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.619923  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.545593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.620178  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0916 02:51:49.641002  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.513486ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.642192  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.642228  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.642265  108846 httplog.go:90] GET /healthz: (2.71128ms) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:49.654375  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.654407  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.654694  108846 httplog.go:90] GET /healthz: (1.424955ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.660176  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.767611ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.660442  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0916 02:51:49.678686  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.252368ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.699927  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.514934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.700269  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
E0916 02:51:49.704587  108846 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:33445/apis/events.k8s.io/v1beta1/namespaces/permit-pluginac85e662-9008-41f2-83db-7022190d9218/events: dial tcp 127.0.0.1:33445: connect: connection refused' (may retry after sleeping)
I0916 02:51:49.719048  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.567037ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.739351  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.739391  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.739434  108846 httplog.go:90] GET /healthz: (1.070456ms) 0 [Go-http-client/1.1 127.0.0.1:57428]
I0916 02:51:49.741164  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.74075ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.741381  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0916 02:51:49.754443  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.754665  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.754833  108846 httplog.go:90] GET /healthz: (1.567242ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.760385  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.3772ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.779893  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.449808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.780325  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0916 02:51:49.798744  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.397526ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.820026  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.625855ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.820572  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0916 02:51:49.838892  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.556751ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.839893  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.840415  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.840452  108846 httplog.go:90] GET /healthz: (1.765669ms) 0 [Go-http-client/1.1 127.0.0.1:57428]
I0916 02:51:49.854682  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.854725  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.854770  108846 httplog.go:90] GET /healthz: (1.550723ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.859901  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.470813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.860165  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0916 02:51:49.878905  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.568568ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.900348  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.808264ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.900643  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0916 02:51:49.919089  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.609248ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.939858  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.410171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:49.940289  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0916 02:51:49.940595  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.940763  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.941076  108846 httplog.go:90] GET /healthz: (2.65532ms) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:49.954756  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:49.955026  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:49.955240  108846 httplog.go:90] GET /healthz: (2.011029ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.958740  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.318177ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.980018  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.630302ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:49.980291  108846 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0916 02:51:49.999157  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.738292ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.001669  108846 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.613954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.020043  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.644517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.020745  108846 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0916 02:51:50.039272  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.835548ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.041315  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:50.041515  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:50.041789  108846 httplog.go:90] GET /healthz: (1.761337ms) 0 [Go-http-client/1.1 127.0.0.1:57428]
I0916 02:51:50.043279  108846 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.19237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.054593  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:50.054654  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:50.054699  108846 httplog.go:90] GET /healthz: (1.508842ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.060388  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.004848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.060864  108846 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0916 02:51:50.079067  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.501379ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.081560  108846 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.33488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.100164  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.498905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.100631  108846 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0916 02:51:50.119379  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.949562ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.122259  108846 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.839904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.139943  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:50.139974  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:50.140016  108846 httplog.go:90] GET /healthz: (1.344894ms) 0 [Go-http-client/1.1 127.0.0.1:57428]
I0916 02:51:50.142439  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.993094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.143483  108846 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0916 02:51:50.154650  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:50.154693  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:50.154750  108846 httplog.go:90] GET /healthz: (1.451586ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.158756  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.421364ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.160703  108846 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.541365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.180712  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.253932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.181253  108846 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0916 02:51:50.199834  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.451526ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.202370  108846 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.756214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.220575  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.03976ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.221138  108846 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0916 02:51:50.239571  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.164319ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.239742  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:50.239826  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:50.239898  108846 httplog.go:90] GET /healthz: (1.491772ms) 0 [Go-http-client/1.1 127.0.0.1:57428]
I0916 02:51:50.241913  108846 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.505576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.254267  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:50.254300  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:50.254363  108846 httplog.go:90] GET /healthz: (1.012087ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.260336  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.534414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.261000  108846 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0916 02:51:50.278942  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.454267ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.280963  108846 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.409274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.300298  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.855553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.300867  108846 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0916 02:51:50.319139  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.742467ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.321969  108846 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.984262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.340068  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.592316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.340329  108846 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0916 02:51:50.340632  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:50.340744  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:50.341120  108846 httplog.go:90] GET /healthz: (2.556066ms) 0 [Go-http-client/1.1 127.0.0.1:57428]
I0916 02:51:50.355560  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:50.355779  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:50.355955  108846 httplog.go:90] GET /healthz: (1.651583ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:50.359927  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (2.487076ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:50.362365  108846 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.875646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:50.379971  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.617272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:50.380236  108846 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0916 02:51:50.399050  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.678852ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:50.401702  108846 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.165828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:50.419955  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.530405ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:50.420225  108846 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0916 02:51:50.439020  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.60002ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:50.439511  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:50.439544  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:50.439625  108846 httplog.go:90] GET /healthz: (1.196659ms) 0 [Go-http-client/1.1 127.0.0.1:57412]
I0916 02:51:50.440916  108846 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.188513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:50.454325  108846 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 02:51:50.454366  108846 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 02:51:50.454404  108846 httplog.go:90] GET /healthz: (1.165408ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:50.459658  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.323953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:50.459877  108846 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0916 02:51:50.478954  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.426793ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:50.480753  108846 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.34544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:50.500262  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.799695ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:50.500698  108846 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0916 02:51:50.519163  108846 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.783334ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:50.520966  108846 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.326322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:50.540675  108846 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (3.151039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:50.541039  108846 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0916 02:51:50.543701  108846 httplog.go:90] GET /healthz: (1.412865ms) 200 [Go-http-client/1.1 127.0.0.1:57428]
W0916 02:51:50.544480  108846 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 02:51:50.544528  108846 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 02:51:50.544547  108846 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 02:51:50.544586  108846 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 02:51:50.544599  108846 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 02:51:50.544624  108846 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 02:51:50.544640  108846 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 02:51:50.544661  108846 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 02:51:50.544672  108846 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 02:51:50.544681  108846 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 02:51:50.544721  108846 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0916 02:51:50.544744  108846 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0916 02:51:50.544754  108846 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0916 02:51:50.544947  108846 shared_informer.go:197] Waiting for caches to sync for scheduler
I0916 02:51:50.545300  108846 reflector.go:120] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0916 02:51:50.545330  108846 reflector.go:158] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0916 02:51:50.546410  108846 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (741.183µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:51:50.547292  108846 get.go:251] Starting watch for /api/v1/pods, rv=30560 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=8m26s
I0916 02:51:50.554373  108846 httplog.go:90] GET /healthz: (1.164353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.556077  108846 httplog.go:90] GET /api/v1/namespaces/default: (1.167294ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.558343  108846 httplog.go:90] POST /api/v1/namespaces: (1.687562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.560123  108846 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.240852ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.564275  108846 httplog.go:90] POST /api/v1/namespaces/default/services: (3.422621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.565705  108846 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (894.697µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.567821  108846 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.667229ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.645180  108846 shared_informer.go:227] caches populated
I0916 02:51:50.645278  108846 shared_informer.go:204] Caches are synced for scheduler 
I0916 02:51:50.645744  108846 reflector.go:120] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.645775  108846 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.645799  108846 reflector.go:120] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.645821  108846 reflector.go:158] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.646312  108846 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.646332  108846 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.646658  108846 reflector.go:120] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.646680  108846 reflector.go:158] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.646708  108846 reflector.go:120] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.646730  108846 reflector.go:158] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.646764  108846 reflector.go:120] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.646781  108846 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.647236  108846 reflector.go:120] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.647255  108846 reflector.go:158] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.647296  108846 reflector.go:120] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.647312  108846 reflector.go:158] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.647338  108846 reflector.go:120] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.647354  108846 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.647690  108846 reflector.go:120] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.647707  108846 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0916 02:51:50.649759  108846 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (773.025µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57622]
I0916 02:51:50.649759  108846 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (483.982µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57624]
I0916 02:51:50.650208  108846 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (370.971µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57626]
I0916 02:51:50.650248  108846 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (331.389µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57616]
I0916 02:51:50.650746  108846 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (426.308µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57628]
I0916 02:51:50.650776  108846 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (399.713µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57618]
I0916 02:51:50.651263  108846 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (355.046µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57620]
I0916 02:51:50.651341  108846 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (472.101µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57630]
I0916 02:51:50.651536  108846 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=30565 labels= fields= timeout=9m40s
I0916 02:51:50.651814  108846 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (384.764µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57624]
I0916 02:51:50.652464  108846 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=30591 labels= fields= timeout=6m0s
I0916 02:51:50.652510  108846 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=30574 labels= fields= timeout=8m6s
I0916 02:51:50.652539  108846 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=30570 labels= fields= timeout=6m12s
I0916 02:51:50.653220  108846 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=30562 labels= fields= timeout=5m34s
I0916 02:51:50.653541  108846 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (4.738453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:51:50.654037  108846 get.go:251] Starting watch for /api/v1/nodes, rv=30560 labels= fields= timeout=7m4s
I0916 02:51:50.654336  108846 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30560 labels= fields= timeout=6m16s
I0916 02:51:50.654336  108846 get.go:251] Starting watch for /api/v1/services, rv=30728 labels= fields= timeout=6m2s
I0916 02:51:50.654581  108846 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30570 labels= fields= timeout=5m56s
I0916 02:51:50.655104  108846 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30560 labels= fields= timeout=7m41s
I0916 02:51:50.745737  108846 shared_informer.go:227] caches populated
I0916 02:51:50.745777  108846 shared_informer.go:227] caches populated
I0916 02:51:50.745784  108846 shared_informer.go:227] caches populated
I0916 02:51:50.745790  108846 shared_informer.go:227] caches populated
I0916 02:51:50.745797  108846 shared_informer.go:227] caches populated
I0916 02:51:50.745803  108846 shared_informer.go:227] caches populated
I0916 02:51:50.745808  108846 shared_informer.go:227] caches populated
I0916 02:51:50.745813  108846 shared_informer.go:227] caches populated
I0916 02:51:50.745819  108846 shared_informer.go:227] caches populated
I0916 02:51:50.745829  108846 shared_informer.go:227] caches populated
I0916 02:51:50.745838  108846 shared_informer.go:227] caches populated
I0916 02:51:50.749120  108846 httplog.go:90] POST /api/v1/nodes: (2.767055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:50.749545  108846 node_tree.go:93] Added node "testnode" in group "" to NodeTree
I0916 02:51:50.753415  108846 httplog.go:90] PUT /api/v1/nodes/testnode/status: (3.424742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:50.756828  108846 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods: (2.857866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:50.757338  108846 scheduling_queue.go:830] About to try and schedule pod node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pidpressure-fake-name
I0916 02:51:50.757506  108846 scheduler.go:530] Attempting to schedule pod: node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pidpressure-fake-name
I0916 02:51:50.757961  108846 scheduler_binder.go:257] AssumePodVolumes for pod "node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pidpressure-fake-name", node "testnode"
I0916 02:51:50.758086  108846 scheduler_binder.go:267] AssumePodVolumes for pod "node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pidpressure-fake-name", node "testnode": all PVCs bound and nothing to do
I0916 02:51:50.758288  108846 factory.go:606] Attempting to bind pidpressure-fake-name to testnode
I0916 02:51:50.760845  108846 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name/binding: (2.059185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:50.761033  108846 scheduler.go:662] pod node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pidpressure-fake-name is bound successfully on node "testnode", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0916 02:51:50.763314  108846 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/events: (2.029106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:50.860266  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.320157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:50.960430  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.179727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:51.060480  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.257908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:51.160389  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.83098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:51.259794  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.022741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:51.359884  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.195533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:51.459673  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.149847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:51.559509  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.006108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:51.651975  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:51.652355  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:51.653436  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:51.653924  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:51.653934  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:51.654237  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:51.659781  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.219686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:51.759516  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.938021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:51.859627  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.041944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:51.960693  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (3.164617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:52.059956  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.43409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:52.159771  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.218295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
E0916 02:51:52.236869  108846 factory.go:590] Error getting pod permit-pluginac85e662-9008-41f2-83db-7022190d9218/test-pod for retry: Get http://127.0.0.1:33445/api/v1/namespaces/permit-pluginac85e662-9008-41f2-83db-7022190d9218/pods/test-pod: dial tcp 127.0.0.1:33445: connect: connection refused; retrying...
I0916 02:51:52.259757  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.227991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:52.360475  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.733391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:52.459530  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.866189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:52.559482  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.865955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:52.652189  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:52.652500  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:52.653645  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:52.654029  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:52.654077  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:52.654390  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:52.659309  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.759334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:52.759649  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.078485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:52.860056  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.071935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:52.959598  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.099052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:53.059389  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.844517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:53.159665  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.050833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:53.261657  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.558752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:53.363854  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (5.407756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:53.463208  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.420656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:53.559333  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.774227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:53.652353  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:53.652634  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:53.653826  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:53.654157  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:53.654239  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:53.654493  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:53.659084  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.596311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:53.759526  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.940401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:53.860532  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.774952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:53.959347  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.860746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:54.060494  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.950688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:54.159365  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.859985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:54.259412  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.835327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:54.361150  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (3.106547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:54.459317  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.825678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:54.562189  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.103191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:54.652543  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:54.652781  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:54.653978  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:54.654324  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:54.654335  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:54.654674  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:54.659317  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.832855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:54.759461  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.70055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:54.860668  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (3.094525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:54.959560  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.842167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:55.059361  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.828472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:55.159441  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.899302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:55.259709  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.05069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:55.359445  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.850913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:55.460082  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.314531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:55.559480  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.898412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:55.652714  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:55.652918  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:55.654150  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:55.654483  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:55.654488  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:55.654798  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:55.659687  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.109377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:55.759223  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.675895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:55.859415  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.836298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:55.959424  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.920363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:56.060101  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.559436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:56.160229  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.866111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:56.259255  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.762817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:56.359355  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.8256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:56.459345  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.753172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:56.559195  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.644353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:56.652857  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:56.653075  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:56.654309  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:56.654657  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:56.654675  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:56.654954  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:56.659557  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.015855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:56.759594  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.922534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:56.859316  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.778581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:56.959161  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.570761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:57.059624  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.115284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:57.159638  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.0991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:57.259350  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.841939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:57.359581  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.058728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:57.459305  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.778816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:57.559334  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.769244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:57.653302  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:57.653362  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:57.654485  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:57.654913  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:57.655008  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:57.655048  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:57.659414  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.743664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:57.759475  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.891533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:57.859599  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.938539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:57.960281  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.744409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:58.059475  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.934792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:58.159246  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.73121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:58.259339  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.736757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:58.359266  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.693147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:58.459458  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.900406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:58.559507  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.94607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:58.653446  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:58.653504  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:58.654719  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:58.655093  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:58.655097  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:58.655197  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:58.659885  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.260416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:58.760147  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.549623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:58.861790  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (4.263538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:58.959100  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.544325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:59.059680  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.094103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:59.160010  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.053067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:59.259296  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.727243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:59.359781  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.130958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:59.459539  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.982648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:59.559212  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.729589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:59.653589  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:59.653675  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:59.654892  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:59.655267  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:59.655387  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:59.655424  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:51:59.659319  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.740239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:59.764868  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.980294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:59.859638  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.003424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:51:59.961989  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (3.063591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:00.059099  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.596038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:00.159496  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.878035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:00.259219  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.707456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:00.359243  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.66422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:00.459438  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.819967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:00.556725  108846 httplog.go:90] GET /api/v1/namespaces/default: (1.685094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:00.558628  108846 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.402895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:00.559725  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.937297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:00.560284  108846 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.260144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:00.653760  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:00.653806  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:00.655116  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:00.655465  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:00.655554  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:00.655651  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:00.659338  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.791743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:00.759255  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.683056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:00.860230  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.68648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
E0916 02:52:00.925814  108846 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:33445/apis/events.k8s.io/v1beta1/namespaces/permit-pluginac85e662-9008-41f2-83db-7022190d9218/events: dial tcp 127.0.0.1:33445: connect: connection refused' (may retry after sleeping)
I0916 02:52:00.960718  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (3.183283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:01.059009  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.540993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:01.161835  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (4.015855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:01.261052  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (3.328486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:01.359484  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.928854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:01.459491  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.921945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:01.559303  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.829318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:01.653929  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:01.653929  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:01.655302  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:01.655648  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:01.655678  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:01.655760  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:01.661620  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (4.101558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:01.759408  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.842798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:01.860357  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.313519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:01.959165  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.644479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:02.059272  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.703617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:02.159388  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.84338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:02.265319  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (4.343656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:02.358961  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.529391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:02.460495  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.469971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:02.559174  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.620287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:02.654490  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:02.654527  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:02.655462  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:02.655787  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:02.655884  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:02.655955  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:02.658954  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.45746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:02.759391  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.890253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:02.859468  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.95299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:02.959523  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.837376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:03.059704  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.982391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:03.159187  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.657435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:03.259365  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.543363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:03.359354  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.80741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:03.459797  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.248345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:03.559365  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.757622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:03.654679  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:03.654682  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:03.655651  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:03.655989  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:03.655997  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:03.656249  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:03.659171  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.635011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:03.759911  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.995599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:03.866334  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (8.595966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:03.964441  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (6.895432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:04.059087  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.585623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:04.159769  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.167487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:04.259340  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.78487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:04.359368  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.618503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:04.459245  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.740027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:04.559260  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.488089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:04.654862  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:04.654874  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:04.655820  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:04.656160  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:04.656170  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:04.656469  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:04.659444  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.871787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:04.759047  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.53316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:04.859757  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.237956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:04.959453  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.896627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:05.059828  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.293847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:05.159439  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.931925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:05.259220  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.404772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:05.359079  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.539566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:05.459258  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.725467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:05.559099  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.572162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:05.655101  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:05.655152  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:05.656001  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:05.656338  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:05.656343  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:05.656646  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:05.659247  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.661249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:05.759409  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.88521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:05.864827  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (7.210817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:05.959513  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.046196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:06.059345  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.783732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:06.159161  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.586094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:06.259671  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.137194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:06.359106  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.597244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:06.459099  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.5246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:06.559396  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.801188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:06.655266  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:06.655330  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:06.656174  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:06.656515  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:06.656647  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:06.656821  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:06.659145  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.669293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:06.759450  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.773593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:06.859190  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.682765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:06.959445  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.878378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:07.059314  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.809202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:07.159561  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.947502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:07.259754  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.023616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:07.359098  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.504718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:07.459099  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.532934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:07.559300  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.758292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:07.655462  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:07.655479  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:07.656373  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:07.656684  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:07.656795  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:07.656981  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:07.659127  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.591078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:07.759284  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.727268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:07.860129  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.584642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:07.959441  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.853977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:08.059777  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.976137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:08.159716  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.994289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:08.259591  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.042074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:08.359156  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.586633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:08.458997  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.437211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:08.559420  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.826099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:08.655642  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:08.655953  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:08.656955  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:08.657201  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:08.657509  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:08.657578  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:08.659382  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.847148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:08.759276  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.744957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:08.860040  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.464812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:08.967968  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.728966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:09.059646  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.078076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:09.159451  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.768887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:09.260022  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.366511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:09.359519  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.777305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:09.460045  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.514484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:09.559228  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.676203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:09.655840  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:09.656130  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:09.657130  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:09.657362  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:09.658030  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:09.658331  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:09.660453  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.957941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:09.759286  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.69442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:09.859403  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.722044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:09.958915  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.384732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:10.059903  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.372678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:10.159238  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.669991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:10.259469  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.87719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:10.359427  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.686263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:10.459344  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.738257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:10.557135  108846 httplog.go:90] GET /api/v1/namespaces/default: (1.912066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:10.558841  108846 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.257452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:10.559275  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.713723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:10.560333  108846 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.038931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:10.655967  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:10.656262  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:10.657296  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:10.657516  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:10.658368  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:10.658445  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:10.659527  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.749599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:10.759397  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.849717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:10.859264  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.702005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:10.960235  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.729282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:11.059247  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.667124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:11.159791  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.114952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:11.259274  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.723995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:11.363254  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (5.660393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:11.461488  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.919446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:11.559013  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.506643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:11.656169  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:11.656376  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:11.657447  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:11.657702  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:11.658522  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:11.658522  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:11.659374  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.766983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:11.760494  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.902916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:11.861577  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (4.062672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:11.959899  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.131593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
E0916 02:52:11.970897  108846 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:33445/apis/events.k8s.io/v1beta1/namespaces/permit-pluginac85e662-9008-41f2-83db-7022190d9218/events: dial tcp 127.0.0.1:33445: connect: connection refused' (may retry after sleeping)
I0916 02:52:12.060532  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.456748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:12.161029  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (3.118844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:12.259386  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.816047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:12.359269  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.762382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:12.459362  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.785261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:12.558991  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.479517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:12.656352  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:12.656594  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:12.657592  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:12.657858  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:12.659311  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.782156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:12.659448  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:12.659513  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:12.759359  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.82059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:12.859238  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.730509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:12.958977  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.432141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:13.059111  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.610545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:13.159000  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.515335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:13.258956  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.486153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:13.359480  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.387295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:13.459589  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.696934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:13.559169  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.657741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:13.656539  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:13.656797  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:13.657774  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:13.657993  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:13.659399  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.862036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:13.659593  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:13.659642  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:13.759008  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.507588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:13.859479  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.985669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:13.959424  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.795215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:14.059248  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.675401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:14.159524  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.96256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:14.260407  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.738626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:14.359131  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.687681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:14.459533  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.908689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:14.559321  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.79171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:14.656718  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:14.657058  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:14.658554  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:14.658557  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:14.659510  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.651201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:14.659789  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:14.659823  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:14.758859  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.337681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:14.859176  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.716438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:14.959233  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.6905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:15.059373  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.867236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:15.160277  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.69731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:15.259201  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.621734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:15.359297  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.823473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:15.459294  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.724312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:15.559190  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.644337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:15.656939  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:15.657218  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:15.659072  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:15.659249  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:15.659942  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:15.660072  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:15.660277  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.763767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:15.759436  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.838209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:15.859417  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.838405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:15.959454  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.920159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:16.059991  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.480863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:16.159253  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.738298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:16.259562  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.006592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:16.359555  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.96571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:16.459469  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.916827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:16.559195  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.610829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:16.657072  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:16.657363  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:16.659238  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:16.659448  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:16.659522  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.127747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:16.660105  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:16.660213  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:16.759584  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.018514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:16.859652  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.824986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:16.959377  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.778355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:17.059591  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.64099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:17.159792  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.294584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:17.259437  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.850522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:17.359389  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.815989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:17.461593  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (4.054915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:17.560792  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (3.032254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:17.657249  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:17.658780  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:17.659460  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.907672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:17.659827  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:17.659914  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:17.660269  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:17.660298  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:17.759140  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.629891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
E0916 02:52:17.837356  108846 factory.go:590] Error getting pod permit-pluginac85e662-9008-41f2-83db-7022190d9218/test-pod for retry: Get http://127.0.0.1:33445/api/v1/namespaces/permit-pluginac85e662-9008-41f2-83db-7022190d9218/pods/test-pod: dial tcp 127.0.0.1:33445: connect: connection refused; retrying...
I0916 02:52:17.859245  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.727194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:17.959713  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.131023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:18.059665  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.084535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:18.159277  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.741281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:18.259428  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.86851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:18.365396  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (5.060163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:18.462184  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.781798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:18.560469  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.952149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:18.657450  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:18.658923  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:18.659674  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.093083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:18.660022  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:18.660106  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:18.660423  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:18.660454  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:18.759314  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.762948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:18.859569  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.017833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:18.959423  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.817245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:19.059681  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.073346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:19.159919  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.247977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:19.259333  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.666424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:19.359219  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.628555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:19.460634  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.494427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:19.558941  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.406039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:19.658051  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:19.659236  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.707687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:19.659366  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:19.660185  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:19.660198  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:19.660554  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:19.660578  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:19.759388  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.814876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:19.860145  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.662165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:19.959024  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.572251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:20.059313  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.724268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:20.159428  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.800591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:20.259166  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.674033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:20.359479  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.831446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:20.460485  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.866487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:20.559178  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.749958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0916 02:52:20.559834  108846 httplog.go:90] GET /api/v1/namespaces/default: (2.463658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:20.561450  108846 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.093192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:20.563133  108846 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.209009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:20.658295  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:20.659597  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (2.038481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:20.659777  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:20.660362  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:20.660384  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:20.660667  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:20.660738  108846 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 02:52:20.759210  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.6356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:20.761299  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.495513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:20.766061  108846 httplog.go:90] DELETE /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (4.252464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:20.768636  108846 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure152b94e1-fbda-402d-a0b3-f640532b8f92/pods/pidpressure-fake-name: (1.050821ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
E0916 02:52:20.769524  108846 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0916 02:52:20.769694  108846 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=30562&timeout=5m34s&timeoutSeconds=334&watch=true: (30.116970345s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57634]
I0916 02:52:20.769733  108846 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30560&timeout=6m16s&timeoutSeconds=376&watch=true: (30.115661278s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57624]
I0916 02:52:20.769862  108846 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=30570&timeout=6m12s&timeoutSeconds=372&watch=true: (30.117537916s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57616]
I0916 02:52:20.769945  108846 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30560&timeout=7m41s&timeoutSeconds=461&watch=true: (30.117183568s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57618]
I0916 02:52:20.769969  108846 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=30728&timeout=6m2s&timeoutSeconds=362&watch=true: (30.116202983s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57630]
I0916 02:52:20.770007  108846 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30560&timeout=7m4s&timeoutSeconds=424&watch=true: (30.116260052s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57620]
I0916 02:52:20.769947  108846 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30570&timeout=5m56s&timeoutSeconds=356&watch=true: (30.115568443s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0916 02:52:20.770049  108846 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=30560&timeoutSeconds=506&watch=true: (30.223040292s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0916 02:52:20.770203  108846 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30574&timeout=8m6s&timeoutSeconds=486&watch=true: (30.11789362s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57628]
I0916 02:52:20.770382  108846 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30591&timeout=6m0s&timeoutSeconds=360&watch=true: (30.118367217s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57636]
I0916 02:52:20.770712  108846 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=30565&timeout=9m40s&timeoutSeconds=580&watch=true: (30.119357568s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57622]
I0916 02:52:20.774260  108846 httplog.go:90] DELETE /api/v1/nodes: (5.217117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:20.774587  108846 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0916 02:52:20.776562  108846 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.22024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
I0916 02:52:20.778736  108846 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.718039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58118]
--- FAIL: TestNodePIDPressure (33.85s)
    predicates_test.go:924: Test Failed: error, timed out waiting for the condition, while waiting for scheduled

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190916-024417.xml
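
The failure message above ("timed out waiting for the condition, while waiting for scheduled") is the generic error a polling wait returns when it gives up; the repeated GET requests for pidpressure-fake-name at roughly 100ms intervals earlier in the log are that poll loop checking whether the pod has been bound to a node. Below is a minimal Go sketch of such a wait helper; the helper name, the interval/timeout values, and the clientset wiring are illustrative assumptions, not the exact code in predicates_test.go.

package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodScheduled polls the API server until the named pod has been bound
// to a node (Spec.NodeName is set) or the timeout expires. On timeout,
// wait.Poll returns the "timed out waiting for the condition" error seen in
// the failure above. (Sketch only; not the test's actual helper.)
func waitForPodScheduled(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.Poll(100*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Spec.NodeName != "", nil
	})
}

func main() {
	// The real integration test passes in a clientset pointed at the
	// in-process test API server; constructing one is out of scope here.
	fmt.Println("see waitForPodScheduled above")
}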



Passed tests: 2862

Skipped tests: 4

Error lines from build-log.txt

... skipping 877 lines ...
W0916 02:39:19.352] I0916 02:39:19.340003   52852 shared_informer.go:197] Waiting for caches to sync for daemon sets
W0916 02:39:19.352] I0916 02:39:19.340628   52852 controllermanager.go:534] Started "podgc"
W0916 02:39:19.353] W0916 02:39:19.340923   52852 controllermanager.go:526] Skipping "nodeipam"
W0916 02:39:19.353] I0916 02:39:19.340721   52852 gc_controller.go:75] Starting GC controller
W0916 02:39:19.353] I0916 02:39:19.341298   52852 shared_informer.go:197] Waiting for caches to sync for GC
W0916 02:39:19.353] I0916 02:39:19.341996   52852 node_lifecycle_controller.go:77] Sending events to api server
W0916 02:39:19.353] E0916 02:39:19.342325   52852 core.go:201] failed to start cloud node lifecycle controller: no cloud provider provided
W0916 02:39:19.353] W0916 02:39:19.342570   52852 controllermanager.go:526] Skipping "cloud-node-lifecycle"
W0916 02:39:19.354] I0916 02:39:19.343757   52852 controllermanager.go:534] Started "serviceaccount"
W0916 02:39:19.354] I0916 02:39:19.343885   52852 serviceaccounts_controller.go:116] Starting service account controller
W0916 02:39:19.354] I0916 02:39:19.344205   52852 shared_informer.go:197] Waiting for caches to sync for service account
W0916 02:39:19.557] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
I0916 02:39:19.658] NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
... skipping 20 lines ...
W0916 02:39:20.047] I0916 02:39:19.752574   52852 controllermanager.go:534] Started "csrcleaner"
W0916 02:39:20.047] I0916 02:39:19.751457   52852 garbagecollector.go:130] Starting garbage collector controller
W0916 02:39:20.047] I0916 02:39:19.752656   52852 shared_informer.go:197] Waiting for caches to sync for garbage collector
W0916 02:39:20.048] I0916 02:39:19.752702   52852 graph_builder.go:282] GraphBuilder running
W0916 02:39:20.048] I0916 02:39:19.752951   52852 cleaner.go:81] Starting CSR cleaner controller
W0916 02:39:20.048] I0916 02:39:19.755108   52852 controllermanager.go:534] Started "ttl"
W0916 02:39:20.048] E0916 02:39:19.756356   52852 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0916 02:39:20.048] W0916 02:39:19.756650   52852 controllermanager.go:526] Skipping "service"
W0916 02:39:20.048] I0916 02:39:19.756866   52852 core.go:211] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0916 02:39:20.048] W0916 02:39:19.757023   52852 controllermanager.go:526] Skipping "route"
W0916 02:39:20.049] I0916 02:39:19.757258   52852 ttl_controller.go:116] Starting TTL controller
W0916 02:39:20.049] I0916 02:39:19.758053   52852 shared_informer.go:197] Waiting for caches to sync for TTL
W0916 02:39:20.049] I0916 02:39:19.758697   52852 controllermanager.go:534] Started "persistentvolume-binder"
... skipping 6 lines ...
W0916 02:39:20.050] W0916 02:39:19.761886   52852 controllermanager.go:513] "endpointslice" is disabled
W0916 02:39:20.050] I0916 02:39:19.761811   52852 certificate_controller.go:118] Starting certificate controller "csrapproving"
W0916 02:39:20.050] I0916 02:39:19.762291   52852 controllermanager.go:534] Started "pvc-protection"
W0916 02:39:20.051] I0916 02:39:19.762266   52852 shared_informer.go:197] Waiting for caches to sync for certificate-csrapproving
W0916 02:39:20.051] I0916 02:39:19.763935   52852 pvc_protection_controller.go:100] Starting PVC protection controller
W0916 02:39:20.051] I0916 02:39:19.763969   52852 shared_informer.go:197] Waiting for caches to sync for PVC protection
W0916 02:39:20.051] W0916 02:39:19.791576   52852 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0916 02:39:20.051] I0916 02:39:19.820020   52852 shared_informer.go:204] Caches are synced for deployment 
W0916 02:39:20.052] I0916 02:39:19.820743   52852 shared_informer.go:204] Caches are synced for ReplicaSet 
W0916 02:39:20.052] I0916 02:39:19.822277   52852 shared_informer.go:204] Caches are synced for attach detach 
W0916 02:39:20.052] I0916 02:39:19.823170   52852 shared_informer.go:204] Caches are synced for expand 
W0916 02:39:20.052] I0916 02:39:19.826656   52852 shared_informer.go:204] Caches are synced for HPA 
W0916 02:39:20.052] I0916 02:39:19.828208   52852 shared_informer.go:204] Caches are synced for PV protection 
... skipping 8 lines ...
W0916 02:39:20.054] I0916 02:39:19.841922   52852 shared_informer.go:204] Caches are synced for GC 
W0916 02:39:20.054] I0916 02:39:19.858396   52852 shared_informer.go:204] Caches are synced for TTL 
W0916 02:39:20.054] I0916 02:39:19.860171   52852 shared_informer.go:204] Caches are synced for persistent volume 
W0916 02:39:20.055] I0916 02:39:19.863500   52852 shared_informer.go:204] Caches are synced for certificate-csrapproving 
W0916 02:39:20.055] I0916 02:39:19.864502   52852 shared_informer.go:204] Caches are synced for PVC protection 
W0916 02:39:20.055] I0916 02:39:19.923652   52852 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
W0916 02:39:20.055] E0916 02:39:19.932743   52852 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W0916 02:39:20.055] E0916 02:39:19.935216   52852 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0916 02:39:20.129] I0916 02:39:20.128506   52852 shared_informer.go:204] Caches are synced for endpoint 
I0916 02:39:20.229] Successful: the flag '--client' shows correct client info
I0916 02:39:20.230] (BSuccessful: the flag '--client' correctly has no server version info
I0916 02:39:20.230] (B+++ [0916 02:39:20] Testing kubectl version: verify json output
I0916 02:39:20.237] Successful: --output json has correct client info
I0916 02:39:20.242] (BSuccessful: --output json has correct server info
... skipping 63 lines ...
I0916 02:39:23.251] +++ working dir: /go/src/k8s.io/kubernetes
I0916 02:39:23.253] +++ command: run_RESTMapper_evaluation_tests
I0916 02:39:23.264] +++ [0916 02:39:23] Creating namespace namespace-1568601563-10483
I0916 02:39:23.333] namespace/namespace-1568601563-10483 created
I0916 02:39:23.400] Context "test" modified.
I0916 02:39:23.407] +++ [0916 02:39:23] Testing RESTMapper
I0916 02:39:23.504] +++ [0916 02:39:23] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0916 02:39:23.518] +++ exit code: 0
I0916 02:39:23.634] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0916 02:39:23.635] bindings                                                                      true         Binding
I0916 02:39:23.635] componentstatuses                 cs                                          false        ComponentStatus
I0916 02:39:23.635] configmaps                        cm                                          true         ConfigMap
I0916 02:39:23.635] endpoints                         ep                                          true         Endpoints
... skipping 616 lines ...
I0916 02:39:41.998] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0916 02:39:42.085] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0916 02:39:42.154] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0916 02:39:42.240] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0916 02:39:42.390] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:39:42.563] (Bpod/env-test-pod created
W0916 02:39:42.664] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0916 02:39:42.665] error: setting 'all' parameter but found a non empty selector. 
W0916 02:39:42.665] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0916 02:39:42.665] I0916 02:39:41.675432   49305 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0916 02:39:42.666] error: min-available and max-unavailable cannot be both specified
I0916 02:39:42.766] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0916 02:39:42.767] Name:         env-test-pod
I0916 02:39:42.767] Namespace:    test-kubectl-describe-pod
I0916 02:39:42.767] Priority:     0
I0916 02:39:42.767] Node:         <none>
I0916 02:39:42.767] Labels:       <none>
... skipping 174 lines ...
I0916 02:39:55.795] (Bpod/valid-pod patched
I0916 02:39:55.890] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0916 02:39:55.960] (Bpod/valid-pod patched
I0916 02:39:56.047] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0916 02:39:56.195] (Bpod/valid-pod patched
I0916 02:39:56.288] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0916 02:39:56.451] (B+++ [0916 02:39:56] "kubectl patch with resourceVersion 497" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0916 02:39:56.679] pod "valid-pod" deleted
I0916 02:39:56.688] pod/valid-pod replaced
I0916 02:39:56.779] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0916 02:39:56.927] (BSuccessful
I0916 02:39:56.928] message:error: --grace-period must have --force specified
I0916 02:39:56.929] has:\-\-grace-period must have \-\-force specified
I0916 02:39:57.074] Successful
I0916 02:39:57.074] message:error: --timeout must have --force specified
I0916 02:39:57.074] has:\-\-timeout must have \-\-force specified
W0916 02:39:57.222] W0916 02:39:57.222206   52852 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0916 02:39:57.323] node/node-v1-test created
I0916 02:39:57.376] node/node-v1-test replaced
I0916 02:39:57.471] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0916 02:39:57.546] (Bnode "node-v1-test" deleted
I0916 02:39:57.640] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0916 02:39:57.902] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 63 lines ...
I0916 02:40:03.138] +++ [0916 02:40:03] Testing kubectl --save-config
I0916 02:40:03.147] +++ [0916 02:40:03] Creating namespace namespace-1568601603-20862
W0916 02:40:03.248] Edit cancelled, no changes made.
W0916 02:40:03.249] Edit cancelled, no changes made.
W0916 02:40:03.250] Edit cancelled, no changes made.
W0916 02:40:03.251] Edit cancelled, no changes made.
W0916 02:40:03.251] error: 'name' already has a value (valid-pod), and --overwrite is false
W0916 02:40:03.253] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0916 02:40:03.254] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0916 02:40:03.355] namespace/namespace-1568601603-20862 created
I0916 02:40:03.516] Context "test" modified.
I0916 02:40:03.727] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:40:04.109] (Bpod/test-pod created
... skipping 45 lines ...
I0916 02:40:11.903] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0916 02:40:11.910] +++ working dir: /go/src/k8s.io/kubernetes
I0916 02:40:11.917] +++ command: run_kubectl_create_error_tests
I0916 02:40:11.939] +++ [0916 02:40:11] Creating namespace namespace-1568601611-29253
I0916 02:40:12.107] namespace/namespace-1568601611-29253 created
I0916 02:40:12.279] Context "test" modified.
I0916 02:40:12.293] +++ [0916 02:40:12] Testing kubectl create with error
W0916 02:40:12.422] Error: must specify one of -f and -k
W0916 02:40:12.422] 
W0916 02:40:12.423] Create a resource from a file or from stdin.
W0916 02:40:12.423] 
W0916 02:40:12.424]  JSON and YAML formats are accepted.
W0916 02:40:12.425] 
W0916 02:40:12.425] Examples:
... skipping 41 lines ...
W0916 02:40:12.441] 
W0916 02:40:12.441] Usage:
W0916 02:40:12.441]   kubectl create -f FILENAME [options]
W0916 02:40:12.442] 
W0916 02:40:12.443] Use "kubectl <command> --help" for more information about a given command.
W0916 02:40:12.443] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0916 02:40:12.768] +++ [0916 02:40:12] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0916 02:40:12.927] kubectl convert is DEPRECATED and will be removed in a future version.
W0916 02:40:12.928] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0916 02:40:13.180] +++ exit code: 0
I0916 02:40:13.265] Recording: run_kubectl_apply_tests
I0916 02:40:13.267] Running command: run_kubectl_apply_tests
I0916 02:40:13.327] 
... skipping 16 lines ...
I0916 02:40:17.001] apply.sh:276: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
I0916 02:40:17.075] (Bpod "test-pod" deleted
I0916 02:40:17.291] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W0916 02:40:17.561] I0916 02:40:17.561126   49305 client.go:361] parsed scheme: "endpoint"
W0916 02:40:17.562] I0916 02:40:17.561193   49305 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0916 02:40:17.566] I0916 02:40:17.565880   49305 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0916 02:40:17.653] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0916 02:40:17.754] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0916 02:40:17.755] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0916 02:40:17.758] +++ exit code: 0
I0916 02:40:17.789] Recording: run_kubectl_run_tests
I0916 02:40:17.789] Running command: run_kubectl_run_tests
I0916 02:40:17.810] 
... skipping 84 lines ...
I0916 02:40:20.128] Context "test" modified.
I0916 02:40:20.135] +++ [0916 02:40:20] Testing kubectl create filter
I0916 02:40:20.219] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:40:20.368] (Bpod/selector-test-pod created
I0916 02:40:20.460] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0916 02:40:20.542] (BSuccessful
I0916 02:40:20.542] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0916 02:40:20.542] has:pods "selector-test-pod-dont-apply" not found
I0916 02:40:20.620] pod "selector-test-pod" deleted
I0916 02:40:20.641] +++ exit code: 0
I0916 02:40:20.673] Recording: run_kubectl_apply_deployments_tests
I0916 02:40:20.673] Running command: run_kubectl_apply_deployments_tests
I0916 02:40:20.694] 
... skipping 31 lines ...
W0916 02:40:22.267] I0916 02:40:19.034859   52852 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568601617-12007", Name:"nginx-apps", UID:"8af8a0c3-d49f-49d6-a719-34525957a820", APIVersion:"apps/v1", ResourceVersion:"527", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-apps-f88d5cfc9 to 1
W0916 02:40:22.268] I0916 02:40:19.038085   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601617-12007", Name:"nginx-apps-f88d5cfc9", UID:"b78ccafa-3436-4063-9b4a-65006258c9ab", APIVersion:"apps/v1", ResourceVersion:"528", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-apps-f88d5cfc9-76fgs
W0916 02:40:22.268] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0916 02:40:22.268] I0916 02:40:19.466116   49305 controller.go:606] quota admission added evaluator for: cronjobs.batch
W0916 02:40:22.269] I0916 02:40:21.276420   52852 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568601620-9960", Name:"my-depl", UID:"0ab90185-5599-463e-9199-432a7aedd957", APIVersion:"apps/v1", ResourceVersion:"559", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-64b97f7d4d to 1
W0916 02:40:22.269] I0916 02:40:21.280662   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601620-9960", Name:"my-depl-64b97f7d4d", UID:"4c8bba16-9939-43f1-ae38-ecd05fc2d3e1", APIVersion:"apps/v1", ResourceVersion:"560", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-64b97f7d4d-hh9tf
W0916 02:40:22.270] E0916 02:40:22.165535   52852 replica_set.go:450] Sync "namespace-1568601620-9960/my-depl-64b97f7d4d" failed with replicasets.apps "my-depl-64b97f7d4d" not found
I0916 02:40:22.370] apps.sh:138: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:40:22.370] (Bapps.sh:139: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:40:22.447] (Bapps.sh:140: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:40:22.532] (Bapps.sh:144: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:40:22.687] (Bdeployment.apps/nginx created
I0916 02:40:22.788] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0916 02:40:26.983] (BSuccessful
I0916 02:40:26.983] message:Error from server (Conflict): error when applying patch:
I0916 02:40:26.984] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568601620-9960\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0916 02:40:26.984] to:
I0916 02:40:26.984] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0916 02:40:26.984] Name: "nginx", Namespace: "namespace-1568601620-9960"
I0916 02:40:26.986] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568601620-9960\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-09-16T02:40:22Z" "generation":'\x01' "labels":map["name":"nginx"] "name":"nginx" "namespace":"namespace-1568601620-9960" "resourceVersion":"595" "selfLink":"/apis/apps/v1/namespaces/namespace-1568601620-9960/deployments/nginx" "uid":"547a9ef8-1af0-48c3-8fd5-7d0505bd71ce"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-09-16T02:40:22Z" "lastUpdateTime":"2019-09-16T02:40:22Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-09-16T02:40:22Z" "lastUpdateTime":"2019-09-16T02:40:22Z" "message":"ReplicaSet \"nginx-8484dd655\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0916 02:40:26.986] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0916 02:40:26.987] has:Error from server (Conflict)
W0916 02:40:27.087] I0916 02:40:22.690317   52852 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568601620-9960", Name:"nginx", UID:"547a9ef8-1af0-48c3-8fd5-7d0505bd71ce", APIVersion:"apps/v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
W0916 02:40:27.088] I0916 02:40:22.693974   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601620-9960", Name:"nginx-8484dd655", UID:"3624f9e5-49b6-40ab-8c36-ca39efe011a2", APIVersion:"apps/v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-xb5fz
W0916 02:40:27.088] I0916 02:40:22.697552   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601620-9960", Name:"nginx-8484dd655", UID:"3624f9e5-49b6-40ab-8c36-ca39efe011a2", APIVersion:"apps/v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-78pp8
W0916 02:40:27.088] I0916 02:40:22.699682   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601620-9960", Name:"nginx-8484dd655", UID:"3624f9e5-49b6-40ab-8c36-ca39efe011a2", APIVersion:"apps/v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-wt6n5
W0916 02:40:27.089] I0916 02:40:25.582445   52852 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1568601606-8421
I0916 02:40:32.184] deployment.apps/nginx configured
... skipping 146 lines ...
I0916 02:40:39.481] +++ [0916 02:40:39] Creating namespace namespace-1568601639-24439
I0916 02:40:39.560] namespace/namespace-1568601639-24439 created
I0916 02:40:39.635] Context "test" modified.
I0916 02:40:39.641] +++ [0916 02:40:39] Testing kubectl get
I0916 02:40:39.731] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:40:39.817] (BSuccessful
I0916 02:40:39.818] message:Error from server (NotFound): pods "abc" not found
I0916 02:40:39.818] has:pods "abc" not found
I0916 02:40:39.913] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:40:40.001] (BSuccessful
I0916 02:40:40.001] message:Error from server (NotFound): pods "abc" not found
I0916 02:40:40.002] has:pods "abc" not found
I0916 02:40:40.090] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:40:40.172] (BSuccessful
I0916 02:40:40.173] message:{
I0916 02:40:40.173]     "apiVersion": "v1",
I0916 02:40:40.173]     "items": [],
... skipping 23 lines ...
I0916 02:40:40.516] has not:No resources found
I0916 02:40:40.606] Successful
I0916 02:40:40.607] message:NAME
I0916 02:40:40.607] has not:No resources found
I0916 02:40:40.697] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:40:40.798] (BSuccessful
I0916 02:40:40.799] message:error: the server doesn't have a resource type "foobar"
I0916 02:40:40.800] has not:No resources found
I0916 02:40:40.886] Successful
I0916 02:40:40.887] message:No resources found in namespace-1568601639-24439 namespace.
I0916 02:40:40.887] has:No resources found
I0916 02:40:40.970] Successful
I0916 02:40:40.970] message:
I0916 02:40:40.970] has not:No resources found
I0916 02:40:41.058] Successful
I0916 02:40:41.059] message:No resources found in namespace-1568601639-24439 namespace.
I0916 02:40:41.059] has:No resources found
I0916 02:40:41.164] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:40:41.256] (BSuccessful
I0916 02:40:41.256] message:Error from server (NotFound): pods "abc" not found
I0916 02:40:41.257] has:pods "abc" not found
I0916 02:40:41.257] FAIL!
I0916 02:40:41.257] message:Error from server (NotFound): pods "abc" not found
I0916 02:40:41.258] has not:List
I0916 02:40:41.258] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0916 02:40:41.373] Successful
I0916 02:40:41.374] message:I0916 02:40:41.324866   62866 loader.go:375] Config loaded from file:  /tmp/tmp.c7QY9FG3Az/.kube/config
I0916 02:40:41.374] I0916 02:40:41.326385   62866 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0916 02:40:41.374] I0916 02:40:41.347879   62866 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
I0916 02:40:46.923] Successful
I0916 02:40:46.923] message:NAME    DATA   AGE
I0916 02:40:46.923] one     0      0s
I0916 02:40:46.923] three   0      0s
I0916 02:40:46.924] two     0      0s
I0916 02:40:46.924] STATUS    REASON          MESSAGE
I0916 02:40:46.924] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 02:40:46.924] has not:watch is only supported on individual resources
I0916 02:40:48.009] Successful
I0916 02:40:48.010] message:STATUS    REASON          MESSAGE
I0916 02:40:48.010] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 02:40:48.010] has not:watch is only supported on individual resources
I0916 02:40:48.015] +++ [0916 02:40:48] Creating namespace namespace-1568601648-8540
I0916 02:40:48.084] namespace/namespace-1568601648-8540 created
I0916 02:40:48.151] Context "test" modified.
I0916 02:40:48.237] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:40:48.385] (Bpod/valid-pod created
... skipping 56 lines ...
I0916 02:40:48.473] }
I0916 02:40:48.552] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0916 02:40:48.793] (B<no value>Successful
I0916 02:40:48.794] message:valid-pod:
I0916 02:40:48.794] has:valid-pod:
I0916 02:40:48.875] Successful
I0916 02:40:48.875] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0916 02:40:48.876] 	template was:
I0916 02:40:48.876] 		{.missing}
I0916 02:40:48.876] 	object given to jsonpath engine was:
I0916 02:40:48.877] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-09-16T02:40:48Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1568601648-8540", "resourceVersion":"697", "selfLink":"/api/v1/namespaces/namespace-1568601648-8540/pods/valid-pod", "uid":"855d565a-bb22-441f-b5d0-ed6af143e2bf"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0916 02:40:48.877] has:missing is not found
I0916 02:40:48.953] Successful
I0916 02:40:48.954] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0916 02:40:48.954] 	template was:
I0916 02:40:48.954] 		{{.missing}}
I0916 02:40:48.954] 	raw data was:
I0916 02:40:48.955] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-09-16T02:40:48Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1568601648-8540","resourceVersion":"697","selfLink":"/api/v1/namespaces/namespace-1568601648-8540/pods/valid-pod","uid":"855d565a-bb22-441f-b5d0-ed6af143e2bf"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0916 02:40:48.955] 	object given to template engine was:
I0916 02:40:48.956] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-09-16T02:40:48Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1568601648-8540 resourceVersion:697 selfLink:/api/v1/namespaces/namespace-1568601648-8540/pods/valid-pod uid:855d565a-bb22-441f-b5d0-ed6af143e2bf] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0916 02:40:48.956] has:map has no entry for key "missing"
W0916 02:40:49.057] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0916 02:40:50.036] Successful
I0916 02:40:50.036] message:NAME        READY   STATUS    RESTARTS   AGE
I0916 02:40:50.036] valid-pod   0/1     Pending   0          1s
I0916 02:40:50.036] STATUS      REASON          MESSAGE
I0916 02:40:50.036] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 02:40:50.037] has:STATUS
I0916 02:40:50.038] Successful
I0916 02:40:50.038] message:NAME        READY   STATUS    RESTARTS   AGE
I0916 02:40:50.038] valid-pod   0/1     Pending   0          1s
I0916 02:40:50.038] STATUS      REASON          MESSAGE
I0916 02:40:50.039] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 02:40:50.039] has:valid-pod
I0916 02:40:51.118] Successful
I0916 02:40:51.118] message:pod/valid-pod
I0916 02:40:51.119] has not:STATUS
I0916 02:40:51.120] Successful
I0916 02:40:51.120] message:pod/valid-pod
... skipping 72 lines ...
I0916 02:40:52.210] status:
I0916 02:40:52.210]   phase: Pending
I0916 02:40:52.210]   qosClass: Guaranteed
I0916 02:40:52.210] ---
I0916 02:40:52.210] has:name: valid-pod
I0916 02:40:52.289] Successful
I0916 02:40:52.289] message:Error from server (NotFound): pods "invalid-pod" not found
I0916 02:40:52.289] has:"invalid-pod" not found
I0916 02:40:52.365] pod "valid-pod" deleted
I0916 02:40:52.455] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:40:52.607] (Bpod/redis-master created
I0916 02:40:52.610] pod/valid-pod created
I0916 02:40:52.698] Successful
... skipping 35 lines ...
I0916 02:40:53.785] +++ command: run_kubectl_exec_pod_tests
I0916 02:40:53.796] +++ [0916 02:40:53] Creating namespace namespace-1568601653-31079
I0916 02:40:53.871] namespace/namespace-1568601653-31079 created
I0916 02:40:53.942] Context "test" modified.
I0916 02:40:53.948] +++ [0916 02:40:53] Testing kubectl exec POD COMMAND
I0916 02:40:54.035] Successful
I0916 02:40:54.036] message:Error from server (NotFound): pods "abc" not found
I0916 02:40:54.036] has:pods "abc" not found
I0916 02:40:54.193] pod/test-pod created
I0916 02:40:54.292] Successful
I0916 02:40:54.292] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0916 02:40:54.293] has not:pods "test-pod" not found
I0916 02:40:54.295] Successful
I0916 02:40:54.295] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0916 02:40:54.295] has not:pod or type/name must be specified
I0916 02:40:54.381] pod "test-pod" deleted
I0916 02:40:54.400] +++ exit code: 0
I0916 02:40:54.431] Recording: run_kubectl_exec_resource_name_tests
I0916 02:40:54.432] Running command: run_kubectl_exec_resource_name_tests
I0916 02:40:54.453] 
... skipping 2 lines ...
I0916 02:40:54.460] +++ command: run_kubectl_exec_resource_name_tests
I0916 02:40:54.471] +++ [0916 02:40:54] Creating namespace namespace-1568601654-5336
I0916 02:40:54.540] namespace/namespace-1568601654-5336 created
I0916 02:40:54.615] Context "test" modified.
I0916 02:40:54.622] +++ [0916 02:40:54] Testing kubectl exec TYPE/NAME COMMAND
I0916 02:40:54.732] Successful
I0916 02:40:54.733] message:error: the server doesn't have a resource type "foo"
I0916 02:40:54.733] has:error:
I0916 02:40:54.815] Successful
I0916 02:40:54.816] message:Error from server (NotFound): deployments.apps "bar" not found
I0916 02:40:54.816] has:"bar" not found
I0916 02:40:54.974] pod/test-pod created
I0916 02:40:55.130] replicaset.apps/frontend created
W0916 02:40:55.231] I0916 02:40:55.133663   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601654-5336", Name:"frontend", UID:"05fb3256-2ef6-4a5b-bd58-e326ab7efbd1", APIVersion:"apps/v1", ResourceVersion:"750", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2qvt6
W0916 02:40:55.231] I0916 02:40:55.136447   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601654-5336", Name:"frontend", UID:"05fb3256-2ef6-4a5b-bd58-e326ab7efbd1", APIVersion:"apps/v1", ResourceVersion:"750", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9l2bw
W0916 02:40:55.232] I0916 02:40:55.136734   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601654-5336", Name:"frontend", UID:"05fb3256-2ef6-4a5b-bd58-e326ab7efbd1", APIVersion:"apps/v1", ResourceVersion:"750", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kh4bj
I0916 02:40:55.332] configmap/test-set-env-config created
I0916 02:40:55.377] Successful
I0916 02:40:55.378] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0916 02:40:55.378] has:not implemented
I0916 02:40:55.464] Successful
I0916 02:40:55.465] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0916 02:40:55.466] has not:not found
I0916 02:40:55.466] Successful
I0916 02:40:55.467] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0916 02:40:55.467] has not:pod or type/name must be specified
I0916 02:40:55.562] Successful
I0916 02:40:55.563] message:Error from server (BadRequest): pod frontend-2qvt6 does not have a host assigned
I0916 02:40:55.563] has not:not found
I0916 02:40:55.564] Successful
I0916 02:40:55.564] message:Error from server (BadRequest): pod frontend-2qvt6 does not have a host assigned
I0916 02:40:55.565] has not:pod or type/name must be specified
I0916 02:40:55.641] pod "test-pod" deleted
I0916 02:40:55.721] replicaset.apps "frontend" deleted
I0916 02:40:55.801] configmap "test-set-env-config" deleted
I0916 02:40:55.820] +++ exit code: 0
I0916 02:40:55.852] Recording: run_create_secret_tests
I0916 02:40:55.852] Running command: run_create_secret_tests
I0916 02:40:55.874] 
I0916 02:40:55.877] +++ Running case: test-cmd.run_create_secret_tests 
I0916 02:40:55.879] +++ working dir: /go/src/k8s.io/kubernetes
I0916 02:40:55.882] +++ command: run_create_secret_tests
I0916 02:40:55.973] Successful
I0916 02:40:55.973] message:Error from server (NotFound): secrets "mysecret" not found
I0916 02:40:55.974] has:secrets "mysecret" not found
I0916 02:40:56.128] Successful
I0916 02:40:56.129] message:Error from server (NotFound): secrets "mysecret" not found
I0916 02:40:56.129] has:secrets "mysecret" not found
I0916 02:40:56.130] Successful
I0916 02:40:56.130] message:user-specified
I0916 02:40:56.131] has:user-specified
I0916 02:40:56.198] Successful
I0916 02:40:56.270] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"e7866b4d-547a-455b-b976-836d54acde40","resourceVersion":"771","creationTimestamp":"2019-09-16T02:40:56Z"}}
... skipping 2 lines ...
I0916 02:40:56.429] has:uid
I0916 02:40:56.500] Successful
I0916 02:40:56.500] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"e7866b4d-547a-455b-b976-836d54acde40","resourceVersion":"772","creationTimestamp":"2019-09-16T02:40:56Z"},"data":{"key1":"config1"}}
I0916 02:40:56.500] has:config1
I0916 02:40:56.572] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"e7866b4d-547a-455b-b976-836d54acde40"}}
I0916 02:40:56.654] Successful
I0916 02:40:56.654] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0916 02:40:56.654] has:configmaps "tester-update-cm" not found
I0916 02:40:56.667] +++ exit code: 0
I0916 02:40:56.697] Recording: run_kubectl_create_kustomization_directory_tests
I0916 02:40:56.698] Running command: run_kubectl_create_kustomization_directory_tests
I0916 02:40:56.717] 
I0916 02:40:56.720] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
W0916 02:40:59.258] I0916 02:40:57.145760   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601654-5336", Name:"test-the-deployment-69fdbb5f7d", UID:"c8b57e0e-831b-48ca-b1ce-c387869298e3", APIVersion:"apps/v1", ResourceVersion:"780", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-b4tk8
W0916 02:40:59.259] I0916 02:40:57.145838   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601654-5336", Name:"test-the-deployment-69fdbb5f7d", UID:"c8b57e0e-831b-48ca-b1ce-c387869298e3", APIVersion:"apps/v1", ResourceVersion:"780", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-qf98t
I0916 02:41:00.235] Successful
I0916 02:41:00.235] message:NAME        READY   STATUS    RESTARTS   AGE
I0916 02:41:00.235] valid-pod   0/1     Pending   0          1s
I0916 02:41:00.235] STATUS      REASON          MESSAGE
I0916 02:41:00.236] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 02:41:00.236] has:Timeout exceeded while reading body
I0916 02:41:00.314] Successful
I0916 02:41:00.314] message:NAME        READY   STATUS    RESTARTS   AGE
I0916 02:41:00.314] valid-pod   0/1     Pending   0          2s
I0916 02:41:00.314] has:valid-pod
I0916 02:41:00.379] Successful
I0916 02:41:00.380] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0916 02:41:00.380] has:Invalid timeout value
I0916 02:41:00.455] pod "valid-pod" deleted
I0916 02:41:00.473] +++ exit code: 0
I0916 02:41:00.504] Recording: run_crd_tests
I0916 02:41:00.505] Running command: run_crd_tests
I0916 02:41:00.526] 
... skipping 155 lines ...
I0916 02:41:04.895] foo.company.com/test patched
I0916 02:41:04.980] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0916 02:41:05.057] (Bfoo.company.com/test patched
I0916 02:41:05.146] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0916 02:41:05.222] (Bfoo.company.com/test patched
I0916 02:41:05.315] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0916 02:41:05.464] (B+++ [0916 02:41:05] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0916 02:41:05.528] {
I0916 02:41:05.528]     "apiVersion": "company.com/v1",
I0916 02:41:05.528]     "kind": "Foo",
I0916 02:41:05.528]     "metadata": {
I0916 02:41:05.528]         "annotations": {
I0916 02:41:05.528]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 212 lines ...
I0916 02:41:34.441] +++ [0916 02:41:34] Testing cmd with image
I0916 02:41:34.532] Successful
I0916 02:41:34.532] message:deployment.apps/test1 created
I0916 02:41:34.533] has:deployment.apps/test1 created
I0916 02:41:34.613] deployment.apps "test1" deleted
I0916 02:41:34.698] Successful
I0916 02:41:34.698] message:error: Invalid image name "InvalidImageName": invalid reference format
I0916 02:41:34.699] has:error: Invalid image name "InvalidImageName": invalid reference format
I0916 02:41:34.712] +++ exit code: 0
I0916 02:41:34.751] +++ [0916 02:41:34] Testing recursive resources
I0916 02:41:34.756] +++ [0916 02:41:34] Creating namespace namespace-1568601694-31690
I0916 02:41:34.837] namespace/namespace-1568601694-31690 created
I0916 02:41:34.911] Context "test" modified.
I0916 02:41:35.002] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:41:35.278] (Bgeneric-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 02:41:35.281] (BSuccessful
I0916 02:41:35.281] message:pod/busybox0 created
I0916 02:41:35.282] pod/busybox1 created
I0916 02:41:35.282] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0916 02:41:35.282] has:error validating data: kind not set
I0916 02:41:35.370] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 02:41:35.538] (Bgeneric-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0916 02:41:35.540] (BSuccessful
I0916 02:41:35.541] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 02:41:35.541] has:Object 'Kind' is missing
I0916 02:41:35.626] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 02:41:35.915] (Bgeneric-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0916 02:41:35.917] (BSuccessful
I0916 02:41:35.918] message:pod/busybox0 replaced
I0916 02:41:35.918] pod/busybox1 replaced
I0916 02:41:35.918] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0916 02:41:35.918] has:error validating data: kind not set
I0916 02:41:36.006] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 02:41:36.101] (BSuccessful
I0916 02:41:36.101] message:Name:         busybox0
I0916 02:41:36.101] Namespace:    namespace-1568601694-31690
I0916 02:41:36.101] Priority:     0
I0916 02:41:36.101] Node:         <none>
... skipping 159 lines ...
I0916 02:41:36.115] has:Object 'Kind' is missing
I0916 02:41:36.193] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 02:41:36.367] (Bgeneric-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0916 02:41:36.369] (BSuccessful
I0916 02:41:36.370] message:pod/busybox0 annotated
I0916 02:41:36.370] pod/busybox1 annotated
I0916 02:41:36.371] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 02:41:36.371] has:Object 'Kind' is missing
I0916 02:41:36.454] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 02:41:36.715] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0916 02:41:36.718] Successful
I0916 02:41:36.718] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0916 02:41:36.718] pod/busybox0 configured
I0916 02:41:36.719] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0916 02:41:36.719] pod/busybox1 configured
I0916 02:41:36.719] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0916 02:41:36.719] has:error validating data: kind not set
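(Editor's sketch: the create/replace/apply steps above walk a directory of manifests recursively; one file in the tree, busybox-broken.yaml, is deliberately malformed — "kind" is misspelled — which produces the validation and decode errors. Directory path as printed in the log; flags are standard kubectl ones.)
  # Apply every manifest under the directory tree; -R/--recursive descends into subdirectories.
  kubectl apply -f hack/testdata/recursive/pod-modify --recursive
  # Client-side validation rejects the broken file; --validate=false bypasses that
  # check (as the error text suggests) at the cost of weaker client-side checks.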
I0916 02:41:36.801] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:41:36.951] deployment.apps/nginx created
I0916 02:41:37.048] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0916 02:41:37.136] generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0916 02:41:37.299] generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
I0916 02:41:37.302] Successful
... skipping 42 lines ...
I0916 02:41:37.378] deployment.apps "nginx" deleted
I0916 02:41:37.474] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 02:41:37.640] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 02:41:37.643] Successful
I0916 02:41:37.643] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0916 02:41:37.643] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0916 02:41:37.643] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 02:41:37.644] has:Object 'Kind' is missing
I0916 02:41:37.732] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 02:41:37.813] Successful
I0916 02:41:37.813] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 02:41:37.813] has:busybox0:busybox1:
I0916 02:41:37.815] Successful
I0916 02:41:37.815] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 02:41:37.816] has:Object 'Kind' is missing
I0916 02:41:37.904] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 02:41:37.990] pod/busybox0 labeled
I0916 02:41:37.991] pod/busybox1 labeled
I0916 02:41:37.991] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 02:41:38.077] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0916 02:41:38.080] Successful
I0916 02:41:38.081] message:pod/busybox0 labeled
I0916 02:41:38.081] pod/busybox1 labeled
I0916 02:41:38.081] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 02:41:38.081] has:Object 'Kind' is missing
I0916 02:41:38.171] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 02:41:38.252] pod/busybox0 patched
I0916 02:41:38.253] pod/busybox1 patched
I0916 02:41:38.253] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 02:41:38.338] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0916 02:41:38.341] Successful
I0916 02:41:38.341] message:pod/busybox0 patched
I0916 02:41:38.341] pod/busybox1 patched
I0916 02:41:38.342] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 02:41:38.342] has:Object 'Kind' is missing
I0916 02:41:38.427] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 02:41:38.598] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:41:38.600] Successful
I0916 02:41:38.600] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0916 02:41:38.601] pod "busybox0" force deleted
I0916 02:41:38.601] pod "busybox1" force deleted
I0916 02:41:38.601] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 02:41:38.601] has:Object 'Kind' is missing
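(Editor's sketch: the "Immediate deletion does not wait for confirmation" warning above comes from a forced delete with a zero grace period; directory path as in the log.)
  # Delete the recursively-created pods without waiting for graceful termination.
  kubectl delete -f hack/testdata/recursive/pod --recursive --grace-period=0 --force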
I0916 02:41:38.689] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:41:38.846] replicationcontroller/busybox0 created
I0916 02:41:38.850] replicationcontroller/busybox1 created
W0916 02:41:38.951] Error from server (NotFound): namespaces "non-native-resources" not found
W0916 02:41:38.951] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0916 02:41:38.952] I0916 02:41:34.519149   52852 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568601694-14928", Name:"test1", UID:"0398e487-1f13-408c-90ac-30fd4f624056", APIVersion:"apps/v1", ResourceVersion:"926", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-6cdffdb5b8 to 1
W0916 02:41:38.952] I0916 02:41:34.523858   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601694-14928", Name:"test1-6cdffdb5b8", UID:"2e28ff06-3c91-4dd5-a2bf-e9dfea1bf59f", APIVersion:"apps/v1", ResourceVersion:"927", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-gbmv8
W0916 02:41:38.952] W0916 02:41:34.920136   49305 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0916 02:41:38.952] E0916 02:41:34.921892   52852 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:38.952] W0916 02:41:35.016586   49305 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0916 02:41:38.953] E0916 02:41:35.018006   52852 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:38.953] W0916 02:41:35.110129   49305 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0916 02:41:38.953] E0916 02:41:35.112101   52852 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:38.953] W0916 02:41:35.203727   49305 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0916 02:41:38.953] E0916 02:41:35.205881   52852 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:38.954] E0916 02:41:35.923466   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:38.954] E0916 02:41:36.019358   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:38.954] E0916 02:41:36.113812   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:38.955] E0916 02:41:36.207234   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:38.955] E0916 02:41:36.924712   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:38.955] I0916 02:41:36.954308   52852 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568601694-31690", Name:"nginx", UID:"419fcff5-346c-4ac7-80d1-8a2ef9774d96", APIVersion:"apps/v1", ResourceVersion:"951", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W0916 02:41:38.956] I0916 02:41:36.957521   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601694-31690", Name:"nginx-f87d999f7", UID:"0051154e-6733-4267-bd78-6b51eaab3205", APIVersion:"apps/v1", ResourceVersion:"952", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-n8cdk
W0916 02:41:38.956] I0916 02:41:36.960243   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601694-31690", Name:"nginx-f87d999f7", UID:"0051154e-6733-4267-bd78-6b51eaab3205", APIVersion:"apps/v1", ResourceVersion:"952", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-fzf5l
W0916 02:41:38.956] I0916 02:41:36.960810   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601694-31690", Name:"nginx-f87d999f7", UID:"0051154e-6733-4267-bd78-6b51eaab3205", APIVersion:"apps/v1", ResourceVersion:"952", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-fmgzn
W0916 02:41:38.957] E0916 02:41:37.021066   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:38.957] E0916 02:41:37.115646   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:38.957] kubectl convert is DEPRECATED and will be removed in a future version.
W0916 02:41:38.957] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0916 02:41:38.958] E0916 02:41:37.208629   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:38.958] E0916 02:41:37.926133   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:38.958] E0916 02:41:38.022595   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:38.959] E0916 02:41:38.116958   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:38.959] E0916 02:41:38.210209   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:38.959] I0916 02:41:38.651363   52852 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0916 02:41:38.960] I0916 02:41:38.849995   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601694-31690", Name:"busybox0", UID:"8d048ce5-83e5-4cd5-bc6e-a51c59d4b188", APIVersion:"v1", ResourceVersion:"982", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-rpz8p
W0916 02:41:38.960] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0916 02:41:38.961] I0916 02:41:38.854043   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601694-31690", Name:"busybox1", UID:"d17dc869-4244-49ee-a2b7-63c5ef22cff3", APIVersion:"v1", ResourceVersion:"984", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-r54hw
W0916 02:41:38.961] E0916 02:41:38.927499   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:39.024] E0916 02:41:39.023995   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:39.119] E0916 02:41:39.118722   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:39.211] E0916 02:41:39.211135   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 02:41:39.312] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 02:41:39.313] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 02:41:39.313] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0916 02:41:39.313] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0916 02:41:39.401] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0916 02:41:39.486] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0916 02:41:39.488] Successful
I0916 02:41:39.489] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0916 02:41:39.489] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0916 02:41:39.489] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 02:41:39.490] has:Object 'Kind' is missing
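(Editor's sketch: the HPA fields checked above — minReplicas 1, maxReplicas 2, target CPU 80% — match what kubectl autoscale produces; an illustrative form of the command using the controller names from the log.)
  # Create a HorizontalPodAutoscaler for each replication controller.
  kubectl autoscale rc busybox0 --min=1 --max=2 --cpu-percent=80
  kubectl autoscale rc busybox1 --min=1 --max=2 --cpu-percent=80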
I0916 02:41:39.564] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0916 02:41:39.648] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0916 02:41:39.740] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 02:41:39.826] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0916 02:41:39.914] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0916 02:41:40.106] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0916 02:41:40.193] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0916 02:41:40.195] Successful
I0916 02:41:40.195] message:service/busybox0 exposed
I0916 02:41:40.195] service/busybox1 exposed
I0916 02:41:40.196] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 02:41:40.196] has:Object 'Kind' is missing
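(Editor's sketch: the services checked above, each with a single unnamed port 80, correspond to exposing the replication controllers; names taken from the log.)
  # Expose each replication controller as a ClusterIP service on port 80.
  kubectl expose rc busybox0 --port=80
  kubectl expose rc busybox1 --port=80
  kubectl get service busybox0 -o go-template='{{(index .spec.ports 0).port}}'   # 80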
I0916 02:41:40.282] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 02:41:40.365] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0916 02:41:40.455] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0916 02:41:40.646] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0916 02:41:40.732] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0916 02:41:40.736] Successful
I0916 02:41:40.736] message:replicationcontroller/busybox0 scaled
I0916 02:41:40.736] replicationcontroller/busybox1 scaled
I0916 02:41:40.737] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 02:41:40.737] has:Object 'Kind' is missing
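(Editor's sketch: the replica counts above go from 1 to 2 via kubectl scale. The test drives this through the recursive manifest directory; shown here against the named controllers instead, as a minimal equivalent.)
  # Scale both replication controllers to 2 replicas.
  kubectl scale rc busybox0 busybox1 --replicas=2
  kubectl get rc busybox0 -o go-template='{{.spec.replicas}}'   # 2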
I0916 02:41:40.825] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 02:41:40.998] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:41:41.000] Successful
I0916 02:41:41.001] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0916 02:41:41.001] replicationcontroller "busybox0" force deleted
I0916 02:41:41.002] replicationcontroller "busybox1" force deleted
I0916 02:41:41.002] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 02:41:41.003] has:Object 'Kind' is missing
I0916 02:41:41.087] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:41:41.233] deployment.apps/nginx1-deployment created
I0916 02:41:41.237] deployment.apps/nginx0-deployment created
I0916 02:41:41.341] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0916 02:41:41.428] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0916 02:41:41.620] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0916 02:41:41.623] Successful
I0916 02:41:41.623] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0916 02:41:41.623] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0916 02:41:41.624] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 02:41:41.624] has:Object 'Kind' is missing
I0916 02:41:41.709] deployment.apps/nginx1-deployment paused
I0916 02:41:41.716] deployment.apps/nginx0-deployment paused
I0916 02:41:41.813] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0916 02:41:41.815] Successful
I0916 02:41:41.816] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 02:41:41.816] has:Object 'Kind' is missing
I0916 02:41:41.906] deployment.apps/nginx1-deployment resumed
I0916 02:41:41.910] deployment.apps/nginx0-deployment resumed
I0916 02:41:42.004] generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I0916 02:41:42.007] Successful
I0916 02:41:42.007] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 02:41:42.008] has:Object 'Kind' is missing
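(Editor's sketch: the .spec.paused field flipping to true and back above reflects kubectl rollout pause/resume on the two deployments; an illustrative sequence for one of them, with the history output shown further down in the log.)
  kubectl rollout pause deployment/nginx1-deployment
  kubectl get deployment nginx1-deployment -o go-template='{{.spec.paused}}'   # true
  kubectl rollout resume deployment/nginx1-deployment
  kubectl rollout history deployment/nginx1-deployment   # REVISION / CHANGE-CAUSE table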
W0916 02:41:42.109] E0916 02:41:39.929803   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:42.109] E0916 02:41:40.025148   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:42.110] E0916 02:41:40.120202   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:42.110] E0916 02:41:40.212452   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:42.110] I0916 02:41:40.543482   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601694-31690", Name:"busybox0", UID:"8d048ce5-83e5-4cd5-bc6e-a51c59d4b188", APIVersion:"v1", ResourceVersion:"1003", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-5fmkb
W0916 02:41:42.111] I0916 02:41:40.557023   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601694-31690", Name:"busybox1", UID:"d17dc869-4244-49ee-a2b7-63c5ef22cff3", APIVersion:"v1", ResourceVersion:"1008", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-rwjl7
W0916 02:41:42.111] E0916 02:41:40.931341   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:42.111] E0916 02:41:41.026672   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:42.111] E0916 02:41:41.121561   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:42.112] E0916 02:41:41.214128   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:42.112] I0916 02:41:41.236523   52852 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568601694-31690", Name:"nginx1-deployment", UID:"88597fb9-90e3-4038-936d-06bb981464ae", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
W0916 02:41:42.112] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0916 02:41:42.113] I0916 02:41:41.239800   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601694-31690", Name:"nginx1-deployment-7bdbbfb5cf", UID:"8ac6ddb0-73d3-4c58-987c-62ea3e85d920", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-kzwkn
W0916 02:41:42.113] I0916 02:41:41.241601   52852 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568601694-31690", Name:"nginx0-deployment", UID:"f3c25b56-bb3d-4035-8fa3-d9fa0cfa1a0b", APIVersion:"apps/v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
W0916 02:41:42.114] I0916 02:41:41.242106   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601694-31690", Name:"nginx1-deployment-7bdbbfb5cf", UID:"8ac6ddb0-73d3-4c58-987c-62ea3e85d920", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-49h8z
W0916 02:41:42.115] I0916 02:41:41.245526   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601694-31690", Name:"nginx0-deployment-57c6bff7f6", UID:"4dc94a86-b6f3-44b9-b9f5-4dd213323c6e", APIVersion:"apps/v1", ResourceVersion:"1030", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-9qtn5
W0916 02:41:42.115] I0916 02:41:41.247465   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601694-31690", Name:"nginx0-deployment-57c6bff7f6", UID:"4dc94a86-b6f3-44b9-b9f5-4dd213323c6e", APIVersion:"apps/v1", ResourceVersion:"1030", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-6x5rr
W0916 02:41:42.115] E0916 02:41:41.932816   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:42.116] E0916 02:41:42.027995   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:42.123] E0916 02:41:42.123182   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:42.182] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0916 02:41:42.197] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0916 02:41:42.215] E0916 02:41:42.215184   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 02:41:42.316] Successful
I0916 02:41:42.316] message:deployment.apps/nginx1-deployment 
I0916 02:41:42.317] REVISION  CHANGE-CAUSE
I0916 02:41:42.317] 1         <none>
I0916 02:41:42.317] 
I0916 02:41:42.317] deployment.apps/nginx0-deployment 
I0916 02:41:42.317] REVISION  CHANGE-CAUSE
I0916 02:41:42.317] 1         <none>
I0916 02:41:42.317] 
I0916 02:41:42.317] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 02:41:42.317] has:nginx0-deployment
I0916 02:41:42.318] Successful
I0916 02:41:42.318] message:deployment.apps/nginx1-deployment 
I0916 02:41:42.318] REVISION  CHANGE-CAUSE
I0916 02:41:42.318] 1         <none>
I0916 02:41:42.318] 
I0916 02:41:42.318] deployment.apps/nginx0-deployment 
I0916 02:41:42.318] REVISION  CHANGE-CAUSE
I0916 02:41:42.318] 1         <none>
I0916 02:41:42.318] 
I0916 02:41:42.319] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 02:41:42.319] has:nginx1-deployment
I0916 02:41:42.319] Successful
I0916 02:41:42.319] message:deployment.apps/nginx1-deployment 
I0916 02:41:42.319] REVISION  CHANGE-CAUSE
I0916 02:41:42.319] 1         <none>
I0916 02:41:42.319] 
I0916 02:41:42.319] deployment.apps/nginx0-deployment 
I0916 02:41:42.319] REVISION  CHANGE-CAUSE
I0916 02:41:42.319] 1         <none>
I0916 02:41:42.319] 
I0916 02:41:42.320] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 02:41:42.320] has:Object 'Kind' is missing
I0916 02:41:42.320] deployment.apps "nginx1-deployment" force deleted
I0916 02:41:42.320] deployment.apps "nginx0-deployment" force deleted
W0916 02:41:42.934] E0916 02:41:42.934239   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:43.030] E0916 02:41:43.029459   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:43.125] E0916 02:41:43.124538   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:43.217] E0916 02:41:43.216404   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 02:41:43.317] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:41:43.440] replicationcontroller/busybox0 created
I0916 02:41:43.445] replicationcontroller/busybox1 created
I0916 02:41:43.538] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 02:41:43.630] Successful
I0916 02:41:43.631] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0916 02:41:43.634] message:no rollbacker has been implemented for "ReplicationController"
I0916 02:41:43.634] no rollbacker has been implemented for "ReplicationController"
I0916 02:41:43.634] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 02:41:43.634] has:Object 'Kind' is missing
I0916 02:41:43.728] Successful
I0916 02:41:43.729] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 02:41:43.729] error: replicationcontrollers "busybox0" pausing is not supported
I0916 02:41:43.729] error: replicationcontrollers "busybox1" pausing is not supported
I0916 02:41:43.729] has:Object 'Kind' is missing
I0916 02:41:43.731] Successful
I0916 02:41:43.731] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 02:41:43.732] error: replicationcontrollers "busybox0" pausing is not supported
I0916 02:41:43.732] error: replicationcontrollers "busybox1" pausing is not supported
I0916 02:41:43.732] has:replicationcontrollers "busybox0" pausing is not supported
I0916 02:41:43.734] Successful
I0916 02:41:43.735] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 02:41:43.735] error: replicationcontrollers "busybox0" pausing is not supported
I0916 02:41:43.735] error: replicationcontrollers "busybox1" pausing is not supported
I0916 02:41:43.735] has:replicationcontrollers "busybox1" pausing is not supported
W0916 02:41:43.836] I0916 02:41:43.444061   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601694-31690", Name:"busybox0", UID:"50cdbed0-949d-47db-9052-16bae7da7003", APIVersion:"v1", ResourceVersion:"1074", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-jrxwc
W0916 02:41:43.836] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0916 02:41:43.837] I0916 02:41:43.448710   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601694-31690", Name:"busybox1", UID:"ec9dad3c-5b08-41a0-94c0-c3b4ac73b31e", APIVersion:"v1", ResourceVersion:"1076", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-z27z5
W0916 02:41:43.917] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0916 02:41:43.934] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0916 02:41:43.937] E0916 02:41:43.937086   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:44.031] E0916 02:41:44.031039   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:44.126] E0916 02:41:44.125845   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:44.218] E0916 02:41:44.217888   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 02:41:44.319] Successful
I0916 02:41:44.319] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 02:41:44.320] error: replicationcontrollers "busybox0" resuming is not supported
I0916 02:41:44.320] error: replicationcontrollers "busybox1" resuming is not supported
I0916 02:41:44.320] has:Object 'Kind' is missing
I0916 02:41:44.320] Successful
I0916 02:41:44.321] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 02:41:44.321] error: replicationcontrollers "busybox0" resuming is not supported
I0916 02:41:44.321] error: replicationcontrollers "busybox1" resuming is not supported
I0916 02:41:44.321] has:replicationcontrollers "busybox0" resuming is not supported
I0916 02:41:44.321] Successful
I0916 02:41:44.322] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 02:41:44.322] error: replicationcontrollers "busybox0" resuming is not supported
I0916 02:41:44.322] error: replicationcontrollers "busybox1" resuming is not supported
I0916 02:41:44.322] has:replicationcontrollers "busybox0" resuming is not supported
I0916 02:41:44.322] replicationcontroller "busybox0" force deleted
I0916 02:41:44.322] replicationcontroller "busybox1" force deleted
W0916 02:41:44.939] E0916 02:41:44.938798   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:45.033] E0916 02:41:45.032718   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:45.128] E0916 02:41:45.127118   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:45.219] E0916 02:41:45.219115   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 02:41:45.320] Recording: run_namespace_tests
I0916 02:41:45.321] Running command: run_namespace_tests
I0916 02:41:45.321] 
I0916 02:41:45.321] +++ Running case: test-cmd.run_namespace_tests 
I0916 02:41:45.322] +++ working dir: /go/src/k8s.io/kubernetes
I0916 02:41:45.322] +++ command: run_namespace_tests
I0916 02:41:45.322] +++ [0916 02:41:44] Testing kubectl(v1:namespaces)
I0916 02:41:45.323] namespace/my-namespace created
I0916 02:41:45.323] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0916 02:41:45.323] namespace "my-namespace" deleted
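(Editor's sketch: the namespace lifecycle exercised here is a plain create/verify/delete sequence; names as in the log.)
  kubectl create namespace my-namespace
  kubectl get namespaces/my-namespace -o go-template='{{.metadata.name}}'   # my-namespace
  kubectl delete namespace my-namespace
  # Deletion is asynchronous; the log below waits for the namespace to disappear
  # before recreating it.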
W0916 02:41:45.941] E0916 02:41:45.940401   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:46.034] E0916 02:41:46.034074   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:46.129] E0916 02:41:46.128483   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:46.221] E0916 02:41:46.220573   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:46.942] E0916 02:41:46.941842   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:47.036] E0916 02:41:47.035712   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:47.130] E0916 02:41:47.130103   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:47.222] E0916 02:41:47.222098   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:47.943] E0916 02:41:47.943253   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:48.038] E0916 02:41:48.037293   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:48.132] E0916 02:41:48.131559   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:48.224] E0916 02:41:48.223424   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:48.945] E0916 02:41:48.944823   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:49.039] E0916 02:41:49.038654   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:49.133] E0916 02:41:49.132968   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:49.225] E0916 02:41:49.224786   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:49.947] E0916 02:41:49.946356   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:50.040] E0916 02:41:50.039976   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:50.135] E0916 02:41:50.134453   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:50.227] E0916 02:41:50.226352   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 02:41:50.327] namespace/my-namespace condition met
I0916 02:41:50.389] Successful
I0916 02:41:50.389] message:Error from server (NotFound): namespaces "my-namespace" not found
I0916 02:41:50.389] has: not found
I0916 02:41:50.457] namespace/my-namespace created
I0916 02:41:50.545] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0916 02:41:50.739] Successful
I0916 02:41:50.739] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0916 02:41:50.740] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0916 02:41:50.746] namespace "namespace-1568601657-25994" deleted
I0916 02:41:50.746] namespace "namespace-1568601658-27407" deleted
I0916 02:41:50.746] namespace "namespace-1568601660-8620" deleted
I0916 02:41:50.746] namespace "namespace-1568601661-25270" deleted
I0916 02:41:50.746] namespace "namespace-1568601694-14928" deleted
I0916 02:41:50.746] namespace "namespace-1568601694-31690" deleted
I0916 02:41:50.747] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0916 02:41:50.747] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0916 02:41:50.747] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0916 02:41:50.747] has:warning: deleting cluster-scoped resources
I0916 02:41:50.747] Successful
I0916 02:41:50.747] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0916 02:41:50.747] namespace "kube-node-lease" deleted
I0916 02:41:50.747] namespace "my-namespace" deleted
I0916 02:41:50.747] namespace "namespace-1568601560-8690" deleted
... skipping 27 lines ...
I0916 02:41:50.750] namespace "namespace-1568601657-25994" deleted
I0916 02:41:50.750] namespace "namespace-1568601658-27407" deleted
I0916 02:41:50.750] namespace "namespace-1568601660-8620" deleted
I0916 02:41:50.750] namespace "namespace-1568601661-25270" deleted
I0916 02:41:50.751] namespace "namespace-1568601694-14928" deleted
I0916 02:41:50.751] namespace "namespace-1568601694-31690" deleted
I0916 02:41:50.751] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0916 02:41:50.751] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0916 02:41:50.751] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0916 02:41:50.751] has:namespace "my-namespace" deleted
I0916 02:41:50.839] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0916 02:41:50.910] namespace/other created
I0916 02:41:51.003] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0916 02:41:51.089] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:41:51.246] pod/valid-pod created
I0916 02:41:51.343] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0916 02:41:51.433] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0916 02:41:51.514] Successful
I0916 02:41:51.515] message:error: a resource cannot be retrieved by name across all namespaces
I0916 02:41:51.515] has:a resource cannot be retrieved by name across all namespaces
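(Editor's sketch: the error above is kubectl's guard against fetching a single named object across all namespaces; listing across namespaces is allowed, but by-name retrieval needs a namespace.)
  kubectl get pods --all-namespaces                # listing across namespaces is fine
  kubectl get pods valid-pod --all-namespaces      # rejected with the error shown above
  kubectl get pods valid-pod --namespace=other     # retrieve by name within one namespace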
I0916 02:41:51.601] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0916 02:41:51.679] pod "valid-pod" force deleted
I0916 02:41:51.768] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:41:51.840] namespace "other" deleted
W0916 02:41:51.941] E0916 02:41:50.947896   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:51.941] E0916 02:41:51.042141   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:51.941] E0916 02:41:51.136025   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:51.942] E0916 02:41:51.227794   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:51.942] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0916 02:41:51.950] E0916 02:41:51.949651   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:52.045] E0916 02:41:52.044566   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:52.138] E0916 02:41:52.137436   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:52.230] E0916 02:41:52.229566   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:52.241] I0916 02:41:52.241133   52852 shared_informer.go:197] Waiting for caches to sync for resource quota
W0916 02:41:52.242] I0916 02:41:52.241197   52852 shared_informer.go:204] Caches are synced for resource quota 
W0916 02:41:52.663] I0916 02:41:52.663300   52852 shared_informer.go:197] Waiting for caches to sync for garbage collector
W0916 02:41:52.664] I0916 02:41:52.663414   52852 shared_informer.go:204] Caches are synced for garbage collector 
W0916 02:41:52.951] E0916 02:41:52.951032   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:53.047] E0916 02:41:53.046325   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:53.139] E0916 02:41:53.138941   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:53.231] E0916 02:41:53.231019   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:53.953] E0916 02:41:53.952754   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:54.048] E0916 02:41:54.047802   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:54.141] E0916 02:41:54.140452   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:54.233] E0916 02:41:54.232467   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:54.305] I0916 02:41:54.304918   52852 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1568601694-31690
W0916 02:41:54.308] I0916 02:41:54.307877   52852 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1568601694-31690
W0916 02:41:54.954] E0916 02:41:54.954196   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:55.049] E0916 02:41:55.049262   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:55.142] E0916 02:41:55.141940   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:55.234] E0916 02:41:55.233994   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:55.956] E0916 02:41:55.955834   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:56.051] E0916 02:41:56.050786   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:56.145] E0916 02:41:56.144790   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:56.236] E0916 02:41:56.235313   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 02:41:56.954] +++ exit code: 0
I0916 02:41:56.989] Recording: run_secrets_test
I0916 02:41:56.990] Running command: run_secrets_test
I0916 02:41:57.013] 
I0916 02:41:57.016] +++ Running case: test-cmd.run_secrets_test 
I0916 02:41:57.018] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 57 lines ...
I0916 02:41:58.823] core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0916 02:41:58.897] secret "test-secret" deleted
I0916 02:41:58.974] secret/test-secret created
I0916 02:41:59.064] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0916 02:41:59.149] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0916 02:41:59.225] secret "test-secret" deleted
W0916 02:41:59.326] E0916 02:41:56.957084   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:59.326] E0916 02:41:57.052205   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:59.327] E0916 02:41:57.146143   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:59.327] E0916 02:41:57.236657   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:59.327] I0916 02:41:57.248157   69060 loader.go:375] Config loaded from file:  /tmp/tmp.c7QY9FG3Az/.kube/config
W0916 02:41:59.327] E0916 02:41:57.958325   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:59.327] E0916 02:41:58.053540   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:59.328] E0916 02:41:58.147546   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:59.328] E0916 02:41:58.237826   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:59.328] E0916 02:41:58.959804   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:59.328] E0916 02:41:59.054752   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:59.328] E0916 02:41:59.148761   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:41:59.329] E0916 02:41:59.239088   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 02:41:59.429] secret/secret-string-data created
I0916 02:41:59.471] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0916 02:41:59.558] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0916 02:41:59.643] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0916 02:41:59.717] secret "secret-string-data" deleted
I0916 02:41:59.806] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:41:59.960] secret "test-secret" deleted
I0916 02:42:00.039] namespace "test-secrets" deleted
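The secret assertions above (core.sh:767-807) cover a TLS secret and a string-data secret. A minimal sketch of equivalent kubectl calls, assuming placeholder certificate and key paths since the fixture files used by core.sh are not visible in this log; note that djE= and djI= are simply base64 of v1 and v2:
  kubectl create namespace test-secrets
  # TLS secret: the type should come back as kubernetes.io/tls (core.sh:767/774).
  kubectl create secret tls test-secret --namespace=test-secrets \
    --cert=/path/to/tls.crt --key=/path/to/tls.key          # paths are placeholders
  kubectl get secret/test-secret --namespace=test-secrets -o go-template='{{.type}}'
  kubectl delete secret test-secret --namespace=test-secrets
  # Literal values end up base64-encoded under .data: v1 -> djE=, v2 -> djI= (core.sh:796-798).
  kubectl create secret generic secret-string-data --namespace=test-secrets \
    --from-literal=k1=v1 --from-literal=k2=v2
  kubectl get secret/secret-string-data --namespace=test-secrets -o go-template='{{.data}}'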
W0916 02:42:00.139] E0916 02:41:59.961437   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:00.140] E0916 02:42:00.056124   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:00.150] E0916 02:42:00.150137   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:00.241] E0916 02:42:00.240491   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:00.386] I0916 02:42:00.385733   52852 namespace_controller.go:171] Namespace has been deleted my-namespace
W0916 02:42:00.842] I0916 02:42:00.841741   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601560-8690
W0916 02:42:00.855] I0916 02:42:00.855305   52852 namespace_controller.go:171] Namespace has been deleted kube-node-lease
W0916 02:42:00.857] I0916 02:42:00.857540   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601579-12597
W0916 02:42:00.860] I0916 02:42:00.860107   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601563-10483
W0916 02:42:00.862] I0916 02:42:00.862281   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601577-824
W0916 02:42:00.873] I0916 02:42:00.872828   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601574-18233
W0916 02:42:00.873] I0916 02:42:00.872828   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601578-27440
W0916 02:42:00.884] I0916 02:42:00.884000   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601574-12633
W0916 02:42:00.888] I0916 02:42:00.888171   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601566-17881
W0916 02:42:00.889] I0916 02:42:00.888171   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601571-25608
W0916 02:42:00.963] E0916 02:42:00.963122   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:01.047] I0916 02:42:01.046538   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601588-9479
W0916 02:42:01.058] E0916 02:42:01.057670   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:01.065] I0916 02:42:01.064386   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601599-18986
W0916 02:42:01.065] I0916 02:42:01.065451   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601604-10770
W0916 02:42:01.066] I0916 02:42:01.065492   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601589-8679
W0916 02:42:01.078] I0916 02:42:01.077768   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601600-12860
W0916 02:42:01.087] I0916 02:42:01.087011   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601606-8421
W0916 02:42:01.091] I0916 02:42:01.090461   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601603-20862
W0916 02:42:01.095] I0916 02:42:01.095419   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601613-24529
W0916 02:42:01.101] I0916 02:42:01.100917   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601611-29253
W0916 02:42:01.101] I0916 02:42:01.100951   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601617-12007
W0916 02:42:01.152] E0916 02:42:01.151683   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:01.242] I0916 02:42:01.241869   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601619-23908
W0916 02:42:01.242] E0916 02:42:01.241958   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:01.246] I0916 02:42:01.245789   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601637-15392
W0916 02:42:01.256] I0916 02:42:01.256077   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601638-12187
W0916 02:42:01.275] I0916 02:42:01.274416   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601653-31079
W0916 02:42:01.283] I0916 02:42:01.282435   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601657-25994
W0916 02:42:01.284] I0916 02:42:01.284446   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601639-24439
W0916 02:42:01.293] I0916 02:42:01.293477   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601648-8540
... skipping 3 lines ...
W0916 02:42:01.403] I0916 02:42:01.403340   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601658-27407
W0916 02:42:01.405] I0916 02:42:01.404728   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601661-25270
W0916 02:42:01.424] I0916 02:42:01.424020   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601660-8620
W0916 02:42:01.444] I0916 02:42:01.443487   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601694-14928
W0916 02:42:01.481] I0916 02:42:01.480663   52852 namespace_controller.go:171] Namespace has been deleted namespace-1568601694-31690
W0916 02:42:01.931] I0916 02:42:01.930541   52852 namespace_controller.go:171] Namespace has been deleted other
W0916 02:42:01.965] E0916 02:42:01.964494   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:02.059] E0916 02:42:02.059226   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:02.153] E0916 02:42:02.153107   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:02.244] E0916 02:42:02.243458   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:02.966] E0916 02:42:02.965989   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:03.061] E0916 02:42:03.060523   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:03.155] E0916 02:42:03.154421   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:03.245] E0916 02:42:03.244857   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:03.968] E0916 02:42:03.967395   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:04.062] E0916 02:42:04.062145   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:04.156] E0916 02:42:04.156125   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:04.247] E0916 02:42:04.246707   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:04.969] E0916 02:42:04.969080   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:05.064] E0916 02:42:05.063519   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:05.157] E0916 02:42:05.157418   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:05.249] E0916 02:42:05.248346   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 02:42:05.349] +++ exit code: 0
I0916 02:42:05.350] Recording: run_configmap_tests
I0916 02:42:05.350] Running command: run_configmap_tests
I0916 02:42:05.350] 
I0916 02:42:05.350] +++ Running case: test-cmd.run_configmap_tests 
I0916 02:42:05.350] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0916 02:42:06.311] configmap/test-binary-configmap created
I0916 02:42:06.395] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0916 02:42:06.479] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0916 02:42:06.712] configmap "test-configmap" deleted
I0916 02:42:06.790] configmap "test-binary-configmap" deleted
I0916 02:42:06.871] namespace "test-configmaps" deleted
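The configmap case (core.sh:48-49) creates a regular and a binary configmap, checks their names, and cleans up. Roughly, with the literal and file inputs below standing in for whatever the script actually feeds it:
  kubectl create namespace test-configmaps
  kubectl create configmap test-configmap --namespace=test-configmaps \
    --from-literal=key1=value1                              # key/value are placeholders
  kubectl create configmap test-binary-configmap --namespace=test-configmaps \
    --from-file=data.bin=/path/to/some.bin                  # non-UTF-8 file content lands in .binaryData
  kubectl get configmap/test-configmap --namespace=test-configmaps -o go-template='{{.metadata.name}}'
  kubectl delete configmap test-configmap test-binary-configmap --namespace=test-configmaps
  kubectl delete namespace test-configmaps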
W0916 02:42:06.972] E0916 02:42:05.970341   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:06.972] E0916 02:42:06.064762   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:06.972] E0916 02:42:06.158585   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:06.973] E0916 02:42:06.249965   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:06.973] E0916 02:42:06.971685   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:07.067] E0916 02:42:07.066420   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:07.160] E0916 02:42:07.160119   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:07.252] E0916 02:42:07.251872   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:07.975] E0916 02:42:07.972981   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:08.068] E0916 02:42:08.067914   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:08.162] E0916 02:42:08.161578   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:08.254] E0916 02:42:08.253711   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:08.974] E0916 02:42:08.974282   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:09.070] E0916 02:42:09.069541   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:09.163] E0916 02:42:09.162966   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:09.255] E0916 02:42:09.255260   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:09.976] E0916 02:42:09.975803   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:10.071] E0916 02:42:10.071038   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:10.131] I0916 02:42:10.130939   52852 namespace_controller.go:171] Namespace has been deleted test-secrets
W0916 02:42:10.165] E0916 02:42:10.164367   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:10.257] E0916 02:42:10.256792   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:10.978] E0916 02:42:10.977291   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:11.073] E0916 02:42:11.072571   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:11.166] E0916 02:42:11.165689   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:11.258] E0916 02:42:11.258202   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:11.979] E0916 02:42:11.978433   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:12.074] E0916 02:42:12.074030   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:12.168] E0916 02:42:12.167300   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:12.260] E0916 02:42:12.259647   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 02:42:12.360] +++ exit code: 0
I0916 02:42:12.361] Recording: run_client_config_tests
I0916 02:42:12.361] Running command: run_client_config_tests
I0916 02:42:12.361] 
I0916 02:42:12.361] +++ Running case: test-cmd.run_client_config_tests 
I0916 02:42:12.361] +++ working dir: /go/src/k8s.io/kubernetes
I0916 02:42:12.361] +++ command: run_client_config_tests
I0916 02:42:12.361] +++ [0916 02:42:12] Creating namespace namespace-1568601732-2147
I0916 02:42:12.361] namespace/namespace-1568601732-2147 created
I0916 02:42:12.362] Context "test" modified.
I0916 02:42:12.362] +++ [0916 02:42:12] Testing client config
I0916 02:42:12.362] Successful
I0916 02:42:12.362] message:error: stat missing: no such file or directory
I0916 02:42:12.362] has:missing: no such file or directory
I0916 02:42:12.362] Successful
I0916 02:42:12.362] message:error: stat missing: no such file or directory
I0916 02:42:12.362] has:missing: no such file or directory
I0916 02:42:12.422] Successful
I0916 02:42:12.422] message:error: stat missing: no such file or directory
I0916 02:42:12.422] has:missing: no such file or directory
I0916 02:42:12.492] Successful
I0916 02:42:12.492] message:Error in configuration: context was not found for specified context: missing-context
I0916 02:42:12.493] has:context was not found for specified context: missing-context
I0916 02:42:12.559] Successful
I0916 02:42:12.559] message:error: no server found for cluster "missing-cluster"
I0916 02:42:12.559] has:no server found for cluster "missing-cluster"
I0916 02:42:12.630] Successful
I0916 02:42:12.631] message:error: auth info "missing-user" does not exist
I0916 02:42:12.631] has:auth info "missing-user" does not exist
I0916 02:42:12.765] Successful
I0916 02:42:12.765] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0916 02:42:12.765] has:error loading config file
I0916 02:42:12.830] Successful
I0916 02:42:12.831] message:error: stat missing-config: no such file or directory
I0916 02:42:12.831] has:no such file or directory
I0916 02:42:12.844] +++ exit code: 0
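The client-config case only checks kubectl's error text for broken kubeconfig inputs; every command below fails while loading client configuration, so no live server is needed. The exact flags used by the script are elided above, but these invocations produce the same messages (assuming the active kubeconfig does not define the referenced context, cluster, or user):
  kubectl get pods --kubeconfig=missing              # error: stat missing: no such file or directory
  kubectl get pods --context=missing-context         # context was not found for specified context
  kubectl get pods --cluster=missing-cluster         # no server found for cluster "missing-cluster"
  kubectl get pods --user=missing-user               # auth info "missing-user" does not exist
  kubectl get pods --kubeconfig=/tmp/newconfig.yaml  # error loading config file, assuming the file declares apiVersion v-1 as in the test
  kubectl get pods --kubeconfig=missing-config       # error: stat missing-config: no such file or directory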
I0916 02:42:12.877] Recording: run_service_accounts_tests
I0916 02:42:12.878] Running command: run_service_accounts_tests
I0916 02:42:12.899] 
I0916 02:42:12.902] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 7 lines ...
I0916 02:42:13.221] namespace/test-service-accounts created
I0916 02:42:13.307] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0916 02:42:13.374] serviceaccount/test-service-account created
I0916 02:42:13.459] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0916 02:42:13.531] serviceaccount "test-service-account" deleted
I0916 02:42:13.616] namespace "test-service-accounts" deleted
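The service-account case (core.sh:832/838) is a plain create, inspect, delete cycle; a rough equivalent:
  kubectl create namespace test-service-accounts
  kubectl create serviceaccount test-service-account --namespace=test-service-accounts
  kubectl get serviceaccount/test-service-account --namespace=test-service-accounts \
    -o go-template='{{.metadata.name}}'
  kubectl delete serviceaccount test-service-account --namespace=test-service-accounts
  kubectl delete namespace test-service-accounts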
W0916 02:42:13.716] E0916 02:42:12.979913   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:13.717] E0916 02:42:13.075268   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:13.717] E0916 02:42:13.168696   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:13.718] E0916 02:42:13.260963   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:13.982] E0916 02:42:13.981395   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:14.077] E0916 02:42:14.076781   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:14.170] E0916 02:42:14.170051   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:14.263] E0916 02:42:14.262377   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:14.983] E0916 02:42:14.983101   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:15.079] E0916 02:42:15.078499   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:15.172] E0916 02:42:15.171307   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:15.264] E0916 02:42:15.263593   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:15.985] E0916 02:42:15.984673   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:16.080] E0916 02:42:16.079912   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:16.173] E0916 02:42:16.172579   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:16.265] E0916 02:42:16.264898   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:16.961] I0916 02:42:16.960539   52852 namespace_controller.go:171] Namespace has been deleted test-configmaps
W0916 02:42:16.986] E0916 02:42:16.986161   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:17.082] E0916 02:42:17.081541   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:17.174] E0916 02:42:17.174097   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:17.266] E0916 02:42:17.266173   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:17.988] E0916 02:42:17.987497   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:18.083] E0916 02:42:18.082819   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:18.176] E0916 02:42:18.175494   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:18.268] E0916 02:42:18.267503   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 02:42:18.726] +++ exit code: 0
I0916 02:42:18.758] Recording: run_job_tests
I0916 02:42:18.758] Running command: run_job_tests
I0916 02:42:18.780] 
I0916 02:42:18.783] +++ Running case: test-cmd.run_job_tests 
I0916 02:42:18.786] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0916 02:42:19.529] Labels:                        run=pi
I0916 02:42:19.529] Annotations:                   <none>
I0916 02:42:19.529] Schedule:                      59 23 31 2 *
I0916 02:42:19.529] Concurrency Policy:            Allow
I0916 02:42:19.530] Suspend:                       False
I0916 02:42:19.530] Successful Job History Limit:  3
I0916 02:42:19.530] Failed Job History Limit:      1
I0916 02:42:19.530] Starting Deadline Seconds:     <unset>
I0916 02:42:19.530] Selector:                      <unset>
I0916 02:42:19.530] Parallelism:                   <unset>
I0916 02:42:19.531] Completions:                   <unset>
I0916 02:42:19.531] Pod Template:
I0916 02:42:19.531]   Labels:  run=pi
... skipping 32 lines ...
I0916 02:42:20.040]                 run=pi
I0916 02:42:20.040] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0916 02:42:20.041] Controlled By:  CronJob/pi
I0916 02:42:20.041] Parallelism:    1
I0916 02:42:20.041] Completions:    1
I0916 02:42:20.041] Start Time:     Mon, 16 Sep 2019 02:42:19 +0000
I0916 02:42:20.041] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0916 02:42:20.042] Pod Template:
I0916 02:42:20.042]   Labels:  controller-uid=889a3721-1a27-4444-a9af-507390289a75
I0916 02:42:20.042]            job-name=test-job
I0916 02:42:20.042]            run=pi
I0916 02:42:20.042]   Containers:
I0916 02:42:20.042]    pi:
... skipping 15 lines ...
I0916 02:42:20.043]   Type    Reason            Age   From            Message
I0916 02:42:20.043]   ----    ------            ----  ----            -------
I0916 02:42:20.043]   Normal  SuccessfulCreate  1s    job-controller  Created pod: test-job-s66lj
I0916 02:42:20.118] job.batch "test-job" deleted
I0916 02:42:20.199] cronjob.batch "pi" deleted
I0916 02:42:20.276] namespace "test-jobs" deleted
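The job case above creates a CronJob named pi through the deprecated cronjob generator (hence the DEPRECATED warning mirrored below) and then instantiates a Job from it, which is why the Job carries the cronjob.kubernetes.io/instantiate: manual annotation and shows Controlled By: CronJob/pi. A sketch of that flow; the perl image and arguments are assumptions, since the actual command line is elided from this log:
  kubectl create namespace test-jobs
  kubectl run pi --namespace=test-jobs --generator=cronjob/v1beta1 \
    --schedule="59 23 31 2 *" --restart=OnFailure \
    --image=perl -- perl -Mbignum=bpi -wle 'print bpi(20)'   # image and command assumed
  kubectl describe cronjob pi --namespace=test-jobs
  kubectl create job test-job --from=cronjob/pi --namespace=test-jobs
  kubectl describe job test-job --namespace=test-jobs
  kubectl delete job test-job --namespace=test-jobs
  kubectl delete cronjob pi --namespace=test-jobs
  kubectl delete namespace test-jobs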
W0916 02:42:20.377] E0916 02:42:18.988825   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:20.378] E0916 02:42:19.084099   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:20.379] E0916 02:42:19.176994   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:20.379] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0916 02:42:20.379] E0916 02:42:19.269011   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:20.380] I0916 02:42:19.782657   52852 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"889a3721-1a27-4444-a9af-507390289a75", APIVersion:"batch/v1", ResourceVersion:"1395", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-s66lj
W0916 02:42:20.380] E0916 02:42:19.990248   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:20.380] E0916 02:42:20.085540   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:20.381] E0916 02:42:20.178311   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:20.381] E0916 02:42:20.270005   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:20.992] E0916 02:42:20.991764   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:21.088] E0916 02:42:21.087459   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:21.180] E0916 02:42:21.179648   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:21.272] E0916 02:42:21.271575   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:21.994] E0916 02:42:21.993300   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:22.089] E0916 02:42:22.089027   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:22.181] E0916 02:42:22.181160   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:22.273] E0916 02:42:22.272862   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:22.995] E0916 02:42:22.994905   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:23.091] E0916 02:42:23.090454   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:23.183] E0916 02:42:23.182720   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:23.275] E0916 02:42:23.274322   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:23.702] I0916 02:42:23.701456   52852 namespace_controller.go:171] Namespace has been deleted test-service-accounts
W0916 02:42:23.997] E0916 02:42:23.996352   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:24.092] E0916 02:42:24.091944   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:24.184] E0916 02:42:24.184166   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:24.276] E0916 02:42:24.275660   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:24.998] E0916 02:42:24.997788   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:25.094] E0916 02:42:25.093490   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:25.186] E0916 02:42:25.185711   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:25.277] E0916 02:42:25.277114   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 02:42:25.398] +++ exit code: 0
I0916 02:42:25.430] Recording: run_create_job_tests
I0916 02:42:25.431] Running command: run_create_job_tests
I0916 02:42:25.453] 
I0916 02:42:25.455] +++ Running case: test-cmd.run_create_job_tests 
I0916 02:42:25.458] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 29 lines ...
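The body of run_create_job_tests is elided here ("... skipping 29 lines ..."), but the SuccessfulCreate events in the mirrored warnings a few lines down name the jobs it creates (test-job, test-job-pi, my-pi). The likely shapes of those commands, with images and arguments as assumptions:
  kubectl create job test-job --image=busybox -- date                                     # image/command assumed
  kubectl create job test-job-pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(10)'   # image/command assumed
  kubectl create job my-pi --from=cronjob/pi                                              # instantiates an existing CronJob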
I0916 02:42:27.053] podtemplate/nginx created
I0916 02:42:27.154] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0916 02:42:27.229] NAME    CONTAINERS   IMAGES   POD LABELS
I0916 02:42:27.230] nginx   nginx        nginx    name=nginx
W0916 02:42:27.331] I0916 02:42:25.691542   52852 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568601745-26580", Name:"test-job", UID:"05ac1629-9b67-4b43-b643-b349c0268696", APIVersion:"batch/v1", ResourceVersion:"1414", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-v465v
W0916 02:42:27.332] I0916 02:42:25.947901   52852 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568601745-26580", Name:"test-job-pi", UID:"435ad11d-bdea-4fd5-bf11-b8464de18c50", APIVersion:"batch/v1", ResourceVersion:"1421", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-kml7b
W0916 02:42:27.332] E0916 02:42:25.999214   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:27.332] E0916 02:42:26.094932   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:27.332] E0916 02:42:26.187184   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:27.333] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0916 02:42:27.333] E0916 02:42:26.278887   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:27.333] I0916 02:42:26.304569   52852 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568601745-26580", Name:"my-pi", UID:"580b999d-9a18-4109-b764-181338920d34", APIVersion:"batch/v1", ResourceVersion:"1429", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-k9gxv
W0916 02:42:27.333] E0916 02:42:27.000999   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:27.334] I0916 02:42:27.050859   49305 controller.go:606] quota admission added evaluator for: podtemplates
W0916 02:42:27.334] E0916 02:42:27.096707   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:27.334] E0916 02:42:27.188953   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:27.334] E0916 02:42:27.280703   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 02:42:27.435] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0916 02:42:27.479] podtemplate "nginx" deleted
I0916 02:42:27.574] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:42:27.587] +++ exit code: 0
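The podtemplate case (core.sh:1419-1431) creates a PodTemplate named nginx, lists it (the NAME/CONTAINERS/IMAGES/POD LABELS row above), and deletes it. There is no dedicated kubectl create subcommand for PodTemplate, so a manifest is fed to kubectl create; the one below is a minimal sketch reconstructed from the listed columns, not the actual fixture used by the script:
  printf '%s\n' \
    'apiVersion: v1' \
    'kind: PodTemplate' \
    'metadata: {name: nginx}' \
    'template:' \
    '  metadata: {labels: {name: nginx}}' \
    '  spec: {containers: [{name: nginx, image: nginx}]}' \
    | kubectl create -f -
  kubectl get podtemplates
  kubectl delete podtemplate nginx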
I0916 02:42:27.620] Recording: run_service_tests
I0916 02:42:27.620] Running command: run_service_tests
... skipping 65 lines ...
I0916 02:42:28.455] Port:              <unset>  6379/TCP
I0916 02:42:28.455] TargetPort:        6379/TCP
I0916 02:42:28.456] Endpoints:         <none>
I0916 02:42:28.456] Session Affinity:  None
I0916 02:42:28.456] Events:            <none>
I0916 02:42:28.456] 
W0916 02:42:28.556] E0916 02:42:28.002324   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:28.557] E0916 02:42:28.098043   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:28.557] E0916 02:42:28.190365   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:28.558] E0916 02:42:28.282451   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 02:42:28.658] Successful describe services:
I0916 02:42:28.659] Name:              kubernetes
I0916 02:42:28.659] Namespace:         default
I0916 02:42:28.660] Labels:            component=apiserver
I0916 02:42:28.660]                    provider=kubernetes
I0916 02:42:28.661] Annotations:       <none>
... skipping 178 lines ...
I0916 02:42:29.498]   selector:
I0916 02:42:29.499]     role: padawan
I0916 02:42:29.499]   sessionAffinity: None
I0916 02:42:29.499]   type: ClusterIP
I0916 02:42:29.499] status:
I0916 02:42:29.499]   loadBalancer: {}
W0916 02:42:29.599] E0916 02:42:29.003795   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:29.600] E0916 02:42:29.099246   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:29.601] E0916 02:42:29.191720   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:29.601] E0916 02:42:29.283826   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:29.601] error: you must specify resources by --filename when --local is set.
W0916 02:42:29.601] Example resource specifications include:
W0916 02:42:29.601]    '-f rsrc.yaml'
W0916 02:42:29.601]    '--filename=rsrc.json'
I0916 02:42:29.702] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0916 02:42:29.838] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0916 02:42:29.920] service "redis-master" deleted
... skipping 8 lines ...
I0916 02:42:30.924] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0916 02:42:31.002] service "redis-master" deleted
I0916 02:42:31.087] service "service-v1-test" deleted
I0916 02:42:31.181] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0916 02:42:31.267] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0916 02:42:31.421] service/redis-master created
W0916 02:42:31.522] E0916 02:42:30.004903   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:31.523] E0916 02:42:30.100634   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:31.523] E0916 02:42:30.193122   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:31.523] E0916 02:42:30.285246   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:31.523] I0916 02:42:30.373314   52852 namespace_controller.go:171] Namespace has been deleted test-jobs
W0916 02:42:31.523] E0916 02:42:31.005970   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:31.524] E0916 02:42:31.102016   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:31.524] E0916 02:42:31.194459   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:31.524] E0916 02:42:31.286841   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 02:42:31.625] service/redis-slave created
I0916 02:42:31.679] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0916 02:42:31.761] Successful
I0916 02:42:31.762] message:NAME           RSRC
I0916 02:42:31.762] kubernetes     145
I0916 02:42:31.762] redis-master   1463
... skipping 54 lines ...
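The "NAME / RSRC" listing above pairs each service name with its resourceVersion. One way to reproduce that shape (the script itself may use a go-template rather than custom-columns), along with the describe call whose redis-master output appears earlier in this section:
  kubectl get services -o custom-columns=NAME:.metadata.name,RSRC:.metadata.resourceVersion
  kubectl describe services redis-master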
I0916 02:42:35.019] +++ [0916 02:42:35] Creating namespace namespace-1568601755-18371
I0916 02:42:35.092] namespace/namespace-1568601755-18371 created
I0916 02:42:35.163] Context "test" modified.
I0916 02:42:35.169] +++ [0916 02:42:35] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
I0916 02:42:35.256] apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:42:35.411] daemonset.apps/bind created
W0916 02:42:35.512] E0916 02:42:32.007360   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:35.512] E0916 02:42:32.104332   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:35.512] E0916 02:42:32.195573   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:35.513] E0916 02:42:32.288272   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:35.513] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0916 02:42:35.513] I0916 02:42:32.708994   52852 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"73d0146f-9eac-4d24-b132-a8441f34b660", APIVersion:"apps/v1", ResourceVersion:"1479", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-bd968f46 to 2
W0916 02:42:35.514] I0916 02:42:32.713356   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"94800bc1-0894-4e26-b5ea-01be089af44c", APIVersion:"apps/v1", ResourceVersion:"1480", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-6rlhk
W0916 02:42:35.514] I0916 02:42:32.716403   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"94800bc1-0894-4e26-b5ea-01be089af44c", APIVersion:"apps/v1", ResourceVersion:"1480", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-svg8j
W0916 02:42:35.515] E0916 02:42:33.008835   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:35.515] E0916 02:42:33.105556   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:35.515] E0916 02:42:33.197030   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:35.516] E0916 02:42:33.289436   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:35.516] I0916 02:42:33.777539   49305 controller.go:606] quota admission added evaluator for: daemonsets.apps
W0916 02:42:35.516] I0916 02:42:33.789134   49305 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W0916 02:42:35.516] E0916 02:42:34.010202   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:35.516] E0916 02:42:34.106997   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:35.517] E0916 02:42:34.198395   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:35.517] E0916 02:42:34.290960   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:35.517] E0916 02:42:35.011650   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:35.517] E0916 02:42:35.108133   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:35.517] E0916 02:42:35.199875   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:35.518] E0916 02:42:35.292398   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 02:42:35.619] apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1568601755-18371"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I0916 02:42:35.619]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
I0916 02:42:35.620] daemonset.apps/bind skipped rollback (current template already matches revision 1)
I0916 02:42:35.711] apps.sh:73: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0916 02:42:35.799] apps.sh:74: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0916 02:42:35.952] daemonset.apps/bind configured
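The "configured" event above appears to correspond to applying the rv2 manifest; the change-cause recorded later in this log (the daemon_controller error dump at 02:42:36.78) shows the command verbatim, roughly:
kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true
# --record stores the kubernetes.io/change-cause annotation that apps.sh:70 checked above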
... skipping 18 lines ...
I0916 02:42:36.587] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0916 02:42:36.673] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0916 02:42:36.771] daemonset.apps/bind rolled back
I0916 02:42:36.867] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0916 02:42:36.953] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0916 02:42:37.051] Successful
I0916 02:42:37.052] message:error: unable to find specified revision 1000000 in history
I0916 02:42:37.052] has:unable to find specified revision
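The failure above is kubectl's response to rolling a daemonset back to a revision that does not exist. A sketch of the command shape (the daemonset name, namespace, and revision number are taken from the surrounding log; the exact flags used by apps.sh are not shown here):
kubectl rollout undo daemonset/bind --to-revision=1000000 -n namespace-1568601755-18371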
I0916 02:42:37.140] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0916 02:42:37.227] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0916 02:42:37.325] daemonset.apps/bind rolled back
I0916 02:42:37.418] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0916 02:42:37.507] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I0916 02:42:38.821] Namespace:    namespace-1568601757-2071
I0916 02:42:38.822] Selector:     app=guestbook,tier=frontend
I0916 02:42:38.822] Labels:       app=guestbook
I0916 02:42:38.822]               tier=frontend
I0916 02:42:38.822] Annotations:  <none>
I0916 02:42:38.822] Replicas:     3 current / 3 desired
I0916 02:42:38.823] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 02:42:38.823] Pod Template:
I0916 02:42:38.823]   Labels:  app=guestbook
I0916 02:42:38.823]            tier=frontend
I0916 02:42:38.823]   Containers:
I0916 02:42:38.823]    php-redis:
I0916 02:42:38.823]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0916 02:42:38.929] Namespace:    namespace-1568601757-2071
I0916 02:42:38.930] Selector:     app=guestbook,tier=frontend
I0916 02:42:38.930] Labels:       app=guestbook
I0916 02:42:38.930]               tier=frontend
I0916 02:42:38.930] Annotations:  <none>
I0916 02:42:38.930] Replicas:     3 current / 3 desired
I0916 02:42:38.930] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 02:42:38.930] Pod Template:
I0916 02:42:38.930]   Labels:  app=guestbook
I0916 02:42:38.930]            tier=frontend
I0916 02:42:38.930]   Containers:
I0916 02:42:38.930]    php-redis:
I0916 02:42:38.931]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0916 02:42:39.036] Namespace:    namespace-1568601757-2071
I0916 02:42:39.036] Selector:     app=guestbook,tier=frontend
I0916 02:42:39.036] Labels:       app=guestbook
I0916 02:42:39.036]               tier=frontend
I0916 02:42:39.036] Annotations:  <none>
I0916 02:42:39.036] Replicas:     3 current / 3 desired
I0916 02:42:39.036] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 02:42:39.036] Pod Template:
I0916 02:42:39.036]   Labels:  app=guestbook
I0916 02:42:39.036]            tier=frontend
I0916 02:42:39.036]   Containers:
I0916 02:42:39.037]    php-redis:
I0916 02:42:39.037]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 4 lines ...
I0916 02:42:39.037]       memory:  100Mi
I0916 02:42:39.037]     Environment:
I0916 02:42:39.037]       GET_HOSTS_FROM:  dns
I0916 02:42:39.037]     Mounts:            <none>
I0916 02:42:39.037]   Volumes:             <none>
I0916 02:42:39.037]
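The repeated blocks above and below are kubectl describe output for the frontend replication controller. A sketch of the command that produces output in this shape (names taken from the log; the core.sh assertion wrapper is not shown):
kubectl describe rc frontend -n namespace-1568601757-2071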
W0916 02:42:39.138] E0916 02:42:36.012927   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:39.139] E0916 02:42:36.109723   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:39.139] E0916 02:42:36.201276   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:39.139] E0916 02:42:36.293797   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:39.144] E0916 02:42:36.782979   52852 daemon_controller.go:302] namespace-1568601755-18371/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1568601755-18371", SelfLink:"/apis/apps/v1/namespaces/namespace-1568601755-18371/daemonsets/bind", UID:"805d8370-9142-463b-a62c-9d47c7c16cf1", ResourceVersion:"1544", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63704198555, loc:(*time.Location)(0x7751f40)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1568601755-18371\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0015df420), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, 
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001df8428), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ddda40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc0015df460), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00030f618)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001df847c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0916 02:42:39.144] E0916 02:42:37.014852   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:39.145] E0916 02:42:37.111359   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:39.145] E0916 02:42:37.202548   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:39.145] E0916 02:42:37.295221   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:39.149] E0916 02:42:37.336404   52852 daemon_controller.go:302] namespace-1568601755-18371/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1568601755-18371", SelfLink:"/apis/apps/v1/namespaces/namespace-1568601755-18371/daemonsets/bind", UID:"805d8370-9142-463b-a62c-9d47c7c16cf1", ResourceVersion:"1547", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63704198555, loc:(*time.Location)(0x7751f40)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1568601755-18371\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001cdc900), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, 
v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0018c1c88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0022e11a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001cdc920), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000ab59e0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0018c1cdc)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0916 02:42:39.150] E0916 02:42:38.016355   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:39.150] E0916 02:42:38.112764   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:39.150] I0916 02:42:38.164569   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"frontend", UID:"42faadad-cbe2-4fe8-8222-ae8bf04b9a15", APIVersion:"v1", ResourceVersion:"1558", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-pj2xd
W0916 02:42:39.151] I0916 02:42:38.167741   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"frontend", UID:"42faadad-cbe2-4fe8-8222-ae8bf04b9a15", APIVersion:"v1", ResourceVersion:"1558", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gfl9k
W0916 02:42:39.151] I0916 02:42:38.167813   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"frontend", UID:"42faadad-cbe2-4fe8-8222-ae8bf04b9a15", APIVersion:"v1", ResourceVersion:"1558", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mpwt9
W0916 02:42:39.151] E0916 02:42:38.204035   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:39.152] E0916 02:42:38.297171   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:39.152] I0916 02:42:38.579957   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"frontend", UID:"bc0c01fa-fff8-44e1-8ad9-b5182a7075d2", APIVersion:"v1", ResourceVersion:"1574", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ht6hw
W0916 02:42:39.152] I0916 02:42:38.582901   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"frontend", UID:"bc0c01fa-fff8-44e1-8ad9-b5182a7075d2", APIVersion:"v1", ResourceVersion:"1574", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dn8c8
W0916 02:42:39.152] I0916 02:42:38.583284   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"frontend", UID:"bc0c01fa-fff8-44e1-8ad9-b5182a7075d2", APIVersion:"v1", ResourceVersion:"1574", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-bjznd
W0916 02:42:39.153] E0916 02:42:39.017813   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:39.153] E0916 02:42:39.114194   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:39.206] E0916 02:42:39.205542   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:39.299] E0916 02:42:39.298535   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 02:42:39.399] core.sh:1067: Successful describe
I0916 02:42:39.400] Name:         frontend
I0916 02:42:39.400] Namespace:    namespace-1568601757-2071
I0916 02:42:39.400] Selector:     app=guestbook,tier=frontend
I0916 02:42:39.400] Labels:       app=guestbook
I0916 02:42:39.400]               tier=frontend
I0916 02:42:39.400] Annotations:  <none>
I0916 02:42:39.401] Replicas:     3 current / 3 desired
I0916 02:42:39.401] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 02:42:39.401] Pod Template:
I0916 02:42:39.401]   Labels:  app=guestbook
I0916 02:42:39.406]            tier=frontend
I0916 02:42:39.406]   Containers:
I0916 02:42:39.407]    php-redis:
I0916 02:42:39.407]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0916 02:42:39.412] Namespace:    namespace-1568601757-2071
I0916 02:42:39.412] Selector:     app=guestbook,tier=frontend
I0916 02:42:39.412] Labels:       app=guestbook
I0916 02:42:39.412]               tier=frontend
I0916 02:42:39.412] Annotations:  <none>
I0916 02:42:39.413] Replicas:     3 current / 3 desired
I0916 02:42:39.413] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 02:42:39.413] Pod Template:
I0916 02:42:39.413]   Labels:  app=guestbook
I0916 02:42:39.413]            tier=frontend
I0916 02:42:39.413]   Containers:
I0916 02:42:39.413]    php-redis:
I0916 02:42:39.414]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0916 02:42:39.416] Namespace:    namespace-1568601757-2071
I0916 02:42:39.417] Selector:     app=guestbook,tier=frontend
I0916 02:42:39.417] Labels:       app=guestbook
I0916 02:42:39.417]               tier=frontend
I0916 02:42:39.417] Annotations:  <none>
I0916 02:42:39.417] Replicas:     3 current / 3 desired
I0916 02:42:39.417] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 02:42:39.417] Pod Template:
I0916 02:42:39.418]   Labels:  app=guestbook
I0916 02:42:39.418]            tier=frontend
I0916 02:42:39.418]   Containers:
I0916 02:42:39.418]    php-redis:
I0916 02:42:39.418]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0916 02:42:39.501] Namespace:    namespace-1568601757-2071
I0916 02:42:39.501] Selector:     app=guestbook,tier=frontend
I0916 02:42:39.501] Labels:       app=guestbook
I0916 02:42:39.501]               tier=frontend
I0916 02:42:39.501] Annotations:  <none>
I0916 02:42:39.501] Replicas:     3 current / 3 desired
I0916 02:42:39.502] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 02:42:39.502] Pod Template:
I0916 02:42:39.502]   Labels:  app=guestbook
I0916 02:42:39.502]            tier=frontend
I0916 02:42:39.502]   Containers:
I0916 02:42:39.502]    php-redis:
I0916 02:42:39.502]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0916 02:42:39.621] Namespace:    namespace-1568601757-2071
I0916 02:42:39.621] Selector:     app=guestbook,tier=frontend
I0916 02:42:39.621] Labels:       app=guestbook
I0916 02:42:39.621]               tier=frontend
I0916 02:42:39.621] Annotations:  <none>
I0916 02:42:39.622] Replicas:     3 current / 3 desired
I0916 02:42:39.622] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 02:42:39.622] Pod Template:
I0916 02:42:39.622]   Labels:  app=guestbook
I0916 02:42:39.622]            tier=frontend
I0916 02:42:39.622]   Containers:
I0916 02:42:39.622]    php-redis:
I0916 02:42:39.622]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I0916 02:42:40.424] core.sh:1099: Successful get rc frontend {{.spec.replicas}}: 3
I0916 02:42:40.509] core.sh:1103: Successful get rc frontend {{.spec.replicas}}: 3
I0916 02:42:40.588] replicationcontroller/frontend scaled
I0916 02:42:40.681] core.sh:1107: Successful get rc frontend {{.spec.replicas}}: 2
I0916 02:42:40.757] replicationcontroller "frontend" deleted
W0916 02:42:40.858] I0916 02:42:39.812708   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"frontend", UID:"bc0c01fa-fff8-44e1-8ad9-b5182a7075d2", APIVersion:"v1", ResourceVersion:"1584", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-ht6hw
W0916 02:42:40.859] E0916 02:42:40.019446   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:40.859] error: Expected replicas to be 3, was 2
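The "Expected replicas to be 3, was 2" error above matches kubectl's precondition check when scaling with --current-replicas. A sketch of a scale call that would fail this way after the controller has already been scaled down to 2 (names from the log; the actual core.sh invocation is not shown):
kubectl scale rc frontend --current-replicas=3 --replicas=2 -n namespace-1568601757-2071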
W0916 02:42:40.859] E0916 02:42:40.115573   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:40.859] E0916 02:42:40.207356   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:40.859] E0916 02:42:40.300243   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:40.860] I0916 02:42:40.332704   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"frontend", UID:"bc0c01fa-fff8-44e1-8ad9-b5182a7075d2", APIVersion:"v1", ResourceVersion:"1590", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-np8wj
W0916 02:42:40.860] I0916 02:42:40.593250   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"frontend", UID:"bc0c01fa-fff8-44e1-8ad9-b5182a7075d2", APIVersion:"v1", ResourceVersion:"1595", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-np8wj
W0916 02:42:40.939] I0916 02:42:40.939085   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"redis-master", UID:"0fe757e0-b5de-4c67-8c40-abefbbcd369a", APIVersion:"v1", ResourceVersion:"1607", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-9jkr8
W0916 02:42:41.021] E0916 02:42:41.020962   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:41.098] I0916 02:42:41.097448   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"redis-slave", UID:"23bcdf29-b71e-4ea5-bf1d-36ab3d9fe896", APIVersion:"v1", ResourceVersion:"1612", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-qg48h
W0916 02:42:41.102] I0916 02:42:41.101368   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"redis-slave", UID:"23bcdf29-b71e-4ea5-bf1d-36ab3d9fe896", APIVersion:"v1", ResourceVersion:"1612", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-rrqlq
W0916 02:42:41.117] E0916 02:42:41.116881   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:41.187] I0916 02:42:41.186938   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"redis-master", UID:"0fe757e0-b5de-4c67-8c40-abefbbcd369a", APIVersion:"v1", ResourceVersion:"1619", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-cq557
W0916 02:42:41.190] I0916 02:42:41.189373   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"redis-master", UID:"0fe757e0-b5de-4c67-8c40-abefbbcd369a", APIVersion:"v1", ResourceVersion:"1619", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-55nlg
W0916 02:42:41.191] I0916 02:42:41.190970   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"redis-master", UID:"0fe757e0-b5de-4c67-8c40-abefbbcd369a", APIVersion:"v1", ResourceVersion:"1619", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-j4gmf
W0916 02:42:41.193] I0916 02:42:41.193314   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"redis-slave", UID:"23bcdf29-b71e-4ea5-bf1d-36ab3d9fe896", APIVersion:"v1", ResourceVersion:"1621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-9nn2v
W0916 02:42:41.197] I0916 02:42:41.196376   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"redis-slave", UID:"23bcdf29-b71e-4ea5-bf1d-36ab3d9fe896", APIVersion:"v1", ResourceVersion:"1621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-6kszl
W0916 02:42:41.208] E0916 02:42:41.208132   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:41.302] E0916 02:42:41.301739   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 02:42:41.403] replicationcontroller/redis-master created
I0916 02:42:41.403] replicationcontroller/redis-slave created
I0916 02:42:41.403] replicationcontroller/redis-master scaled
I0916 02:42:41.403] replicationcontroller/redis-slave scaled
I0916 02:42:41.404] core.sh:1117: Successful get rc redis-master {{.spec.replicas}}: 4
I0916 02:42:41.404] core.sh:1118: Successful get rc redis-slave {{.spec.replicas}}: 4
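core.sh:1117/1118 above verify both controllers at 4 replicas right after creation; kubectl scale accepts several resources in one invocation, so the step plausibly looks like the sketch below (resource names from the log; flags assumed, not confirmed by this output):
kubectl scale rc redis-master redis-slave --replicas=4 -n namespace-1568601757-2071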
... skipping 5 lines ...
I0916 02:42:41.889] deployment.apps "nginx-deployment" deleted
I0916 02:42:41.989] Successful
I0916 02:42:41.989] message:service/expose-test-deployment exposed
I0916 02:42:41.989] has:service/expose-test-deployment exposed
I0916 02:42:42.068] service "expose-test-deployment" deleted
I0916 02:42:42.157] Successful
I0916 02:42:42.157] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0916 02:42:42.158] See 'kubectl expose -h' for help and examples
I0916 02:42:42.158] has:invalid deployment: no selectors
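kubectl expose refuses to create a service when it cannot determine a selector, which is the error quoted above. A sketch of how a similar expose can succeed by supplying one explicitly (the deployment name and selector here are illustrative only, not taken from the failing test object):
kubectl expose deployment nginx-deployment --port=80 --selector=app=nginx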
W0916 02:42:42.259] I0916 02:42:41.624001   52852 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568601757-2071", Name:"nginx-deployment", UID:"b89a962b-76f4-4e3b-89d4-7d853bbb368a", APIVersion:"apps/v1", ResourceVersion:"1654", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W0916 02:42:42.259] I0916 02:42:41.627853   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601757-2071", Name:"nginx-deployment-6986c7bc94", UID:"66194115-b81d-4ecd-8756-5a517bcbfe74", APIVersion:"apps/v1", ResourceVersion:"1655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-dqrmj
W0916 02:42:42.260] I0916 02:42:41.631747   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601757-2071", Name:"nginx-deployment-6986c7bc94", UID:"66194115-b81d-4ecd-8756-5a517bcbfe74", APIVersion:"apps/v1", ResourceVersion:"1655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-2p2wp
W0916 02:42:42.260] I0916 02:42:41.631803   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601757-2071", Name:"nginx-deployment-6986c7bc94", UID:"66194115-b81d-4ecd-8756-5a517bcbfe74", APIVersion:"apps/v1", ResourceVersion:"1655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-xl5sd
W0916 02:42:42.260] I0916 02:42:41.724005   52852 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568601757-2071", Name:"nginx-deployment", UID:"b89a962b-76f4-4e3b-89d4-7d853bbb368a", APIVersion:"apps/v1", ResourceVersion:"1668", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6986c7bc94 to 1
W0916 02:42:42.261] I0916 02:42:41.729828   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601757-2071", Name:"nginx-deployment-6986c7bc94", UID:"66194115-b81d-4ecd-8756-5a517bcbfe74", APIVersion:"apps/v1", ResourceVersion:"1669", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-dqrmj
W0916 02:42:42.261] I0916 02:42:41.729955   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601757-2071", Name:"nginx-deployment-6986c7bc94", UID:"66194115-b81d-4ecd-8756-5a517bcbfe74", APIVersion:"apps/v1", ResourceVersion:"1669", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-xl5sd
W0916 02:42:42.261] E0916 02:42:42.022274   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:42.262] E0916 02:42:42.118724   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:42.262] E0916 02:42:42.209492   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:42.303] E0916 02:42:42.303081   52852 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 02:42:42.321] I0916 02:42:42.320312   52852 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568601757-2071", Name:"nginx-deployment", UID:"89591cdc-55b4-4276-893d-2202cd1cc947", APIVersion:"apps/v1", ResourceVersion:"1692", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W0916 02:42:42.324] I0916 02:42:42.323374   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601757-2071", Name:"nginx-deployment-6986c7bc94", UID:"4e10c69f-cf73-497d-92d8-ceb5fef90e37", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-42kxb
W0916 02:42:42.327] I0916 02:42:42.326418   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601757-2071", Name:"nginx-deployment-6986c7bc94", UID:"4e10c69f-cf73-497d-92d8-ceb5fef90e37", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-7ndc9
W0916 02:42:42.327] I0916 02:42:42.326460   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568601757-2071", Name:"nginx-deployment-6986c7bc94", UID:"4e10c69f-cf73-497d-92d8-ceb5fef90e37", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-4l26x
I0916 02:42:42.428] deployment.apps/nginx-deployment created
I0916 02:42:42.428] core.sh:1146: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
... skipping 18 lines ...
I0916 02:42:44.211] service "frontend" deleted
I0916 02:42:44.218] service "frontend-2" deleted
I0916 02:42:44.225] service "frontend-3" deleted
I0916 02:42:44.233] service "frontend-4" deleted
I0916 02:42:44.242] service "frontend-5" deleted
I0916 02:42:44.336] Successful
I0916 02:42:44.336] message:error: cannot expose a Node
I0916 02:42:44.337] has:cannot expose
I0916 02:42:44.422] Successful
I0916 02:42:44.423] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0916 02:42:44.423] has:metadata.name: Invalid value
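Service names must be valid DNS labels, hence the 63-character limit reported above. A sketch of a create call that would trip the same validation (the over-long name is copied from the log; the actual test command is not shown):
kubectl create service clusterip invalid-large-service-name-that-has-more-than-sixty-three-characters --tcp=80:80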
I0916 02:42:44.513] Successful
I0916 02:42:44.514] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 10 lines ...
I0916 02:42:45.219] core.sh:1219: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:42:45.305] core.sh:1223: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 02:42:45.451] replicationcontroller/frontend created
W0916 02:42:45.552] I0916 02:42:42.846001   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"frontend", UID:"22a3fdf2-4427-4747-af4c-99faef32126c", APIVersion:"v1", ResourceVersion:"1720", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zpjkf
W0916 02:42:45.553] I0916 02:42:42.849035   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"frontend", UID:"22a3fdf2-4427-4747-af4c-99faef32126c", APIVersion:"v1", ResourceVersion:"1720", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-pj6nc
W0916 02:42:45.554] I0916 02:42:42.849241   52852 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568601757-2071", Name:"frontend", UID:"22a3fdf2-4427-4747-af4c-99faef32126c", APIVersion:"v1", ResourceVersion:"1720", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6n6k6
W0916 02:42:45.554] E0916 02:42:43.023777   52852 reflector.go:123] k8s.io/client