PR: damemi: Move PodPriorityResolution e2e to integration
Result: FAILURE
Tests: 1 failed / 2837 succeeded
Started: 2019-08-27 14:32
Elapsed: 29m54s
Revision
Builder: gke-prow-ssd-pool-1a225945-66t6
Refs: master:bcb464db, 80824:54369530
pod: 29d5986c-c8d7-11e9-a97d-2ebde778232b
infra-commit: 78cf77118
repo: k8s.io/kubernetes
repo-commit: e20a99f74a85a0e329dcfa189de5de4dc6b19eeb
repos: k8s.io/kubernetes: master:bcb464db7ba2478429f4deff2c682f4efe46855e, 80824:543695309b3c29c7f086988a457b007c29dcf3b6

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPodPriorityResolution 4.74s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPodPriorityResolution$
=== RUN   TestPodPriorityResolution
I0827 14:58:37.302320  108645 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0827 14:58:37.302350  108645 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0827 14:58:37.302363  108645 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0827 14:58:37.302373  108645 master.go:234] Using reconciler: 
I0827 14:58:37.304515  108645 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.304686  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.304720  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.304864  108645 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0827 14:58:37.304894  108645 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.305106  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.305130  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.305220  108645 store.go:1342] Monitoring events count at <storage-prefix>//events
I0827 14:58:37.305248  108645 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.305340  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.305362  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.305425  108645 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0827 14:58:37.305449  108645 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.305553  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.305573  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.305657  108645 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0827 14:58:37.305784  108645 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.305809  108645 reflector.go:158] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0827 14:58:37.305868  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.305888  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.305954  108645 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0827 14:58:37.305957  108645 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0827 14:58:37.305980  108645 reflector.go:158] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0827 14:58:37.306176  108645 reflector.go:158] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0827 14:58:37.306290  108645 reflector.go:158] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0827 14:58:37.306510  108645 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.306646  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.306670  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.306975  108645 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0827 14:58:37.307038  108645 reflector.go:158] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0827 14:58:37.307173  108645 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.308082  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.308138  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.308244  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.308333  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.308374  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.308983  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.309461  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.309669  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.309833  108645 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0827 14:58:37.309883  108645 reflector.go:158] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0827 14:58:37.309976  108645 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.310115  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.310144  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.310260  108645 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0827 14:58:37.310355  108645 reflector.go:158] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0827 14:58:37.310439  108645 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.310570  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.310596  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.310862  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.311360  108645 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0827 14:58:37.311510  108645 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.311588  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.311627  108645 reflector.go:158] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0827 14:58:37.311684  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.311702  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.311815  108645 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0827 14:58:37.311878  108645 reflector.go:158] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0827 14:58:37.311945  108645 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.312083  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.312176  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.312411  108645 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0827 14:58:37.312626  108645 reflector.go:158] Listing and watching *core.Node from storage/cacher.go:/minions
I0827 14:58:37.312692  108645 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.312852  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.312943  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.312966  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.313203  108645 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0827 14:58:37.313273  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.313333  108645 reflector.go:158] Listing and watching *core.Pod from storage/cacher.go:/pods
I0827 14:58:37.313354  108645 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.313457  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.313475  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.313569  108645 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0827 14:58:37.313704  108645 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.313828  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.313840  108645 reflector.go:158] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0827 14:58:37.316639  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.316678  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.317048  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.317078  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.317332  108645 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0827 14:58:37.317365  108645 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.317431  108645 reflector.go:158] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0827 14:58:37.317729  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.317746  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.317871  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.317883  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.318066  108645 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.318174  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.318190  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.318459  108645 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0827 14:58:37.318587  108645 reflector.go:158] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0827 14:58:37.318876  108645 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.319071  108645 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.319631  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.319687  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.320158  108645 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.321255  108645 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.322006  108645 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.322626  108645 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.323139  108645 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.323256  108645 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.323487  108645 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.324164  108645 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.324978  108645 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.325270  108645 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.326224  108645 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.326518  108645 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.327075  108645 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.327373  108645 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.328037  108645 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.328245  108645 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.328357  108645 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.328459  108645 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.328615  108645 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.328733  108645 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.328978  108645 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.329756  108645 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.330078  108645 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.330988  108645 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.331761  108645 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.332011  108645 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.332336  108645 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.333255  108645 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.333519  108645 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.334319  108645 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.335058  108645 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.335650  108645 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.336379  108645 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.336620  108645 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.336727  108645 master.go:423] Skipping disabled API group "auditregistration.k8s.io".
I0827 14:58:37.336752  108645 master.go:434] Enabling API group "authentication.k8s.io".
I0827 14:58:37.336770  108645 master.go:434] Enabling API group "authorization.k8s.io".
I0827 14:58:37.336939  108645 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.337141  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.337173  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.337429  108645 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0827 14:58:37.337593  108645 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.337602  108645 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0827 14:58:37.337710  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.337731  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.337826  108645 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0827 14:58:37.337946  108645 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0827 14:58:37.338028  108645 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.338130  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.338152  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.338260  108645 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0827 14:58:37.338280  108645 master.go:434] Enabling API group "autoscaling".
I0827 14:58:37.338361  108645 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0827 14:58:37.338388  108645 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.338463  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.338483  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.338603  108645 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0827 14:58:37.338649  108645 reflector.go:158] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0827 14:58:37.338718  108645 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.338826  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.338846  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.338973  108645 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0827 14:58:37.338992  108645 master.go:434] Enabling API group "batch".
I0827 14:58:37.339133  108645 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.339208  108645 reflector.go:158] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0827 14:58:37.339216  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.339303  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.339435  108645 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0827 14:58:37.339465  108645 master.go:434] Enabling API group "certificates.k8s.io".
I0827 14:58:37.339529  108645 reflector.go:158] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0827 14:58:37.339618  108645 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.339761  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.339786  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.339862  108645 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0827 14:58:37.339958  108645 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.340076  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.340104  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.340176  108645 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0827 14:58:37.340198  108645 master.go:434] Enabling API group "coordination.k8s.io".
I0827 14:58:37.340297  108645 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.340378  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.340394  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.340496  108645 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0827 14:58:37.340518  108645 master.go:434] Enabling API group "extensions".
I0827 14:58:37.340608  108645 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.340686  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.340697  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.340758  108645 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0827 14:58:37.340894  108645 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.340986  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.341004  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.341423  108645 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0827 14:58:37.341457  108645 master.go:434] Enabling API group "networking.k8s.io".
I0827 14:58:37.341487  108645 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.341596  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.341626  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.341712  108645 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0827 14:58:37.341733  108645 master.go:434] Enabling API group "node.k8s.io".
I0827 14:58:37.341851  108645 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.341983  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.342009  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.344097  108645 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0827 14:58:37.342156  108645 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0827 14:58:37.344253  108645 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.342184  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.342365  108645 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0827 14:58:37.344398  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.344415  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.344519  108645 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0827 14:58:37.342425  108645 reflector.go:158] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0827 14:58:37.342659  108645 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0827 14:58:37.342812  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.342972  108645 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0827 14:58:37.343181  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.343243  108645 reflector.go:158] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0827 14:58:37.343362  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.344536  108645 master.go:434] Enabling API group "policy".
I0827 14:58:37.344982  108645 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.345114  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.345131  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.344601  108645 reflector.go:158] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0827 14:58:37.344619  108645 reflector.go:158] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0827 14:58:37.345351  108645 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0827 14:58:37.345411  108645 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0827 14:58:37.345463  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.345488  108645 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.345624  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.345641  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.345718  108645 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0827 14:58:37.345743  108645 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.345807  108645 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0827 14:58:37.345826  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.345841  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.345920  108645 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0827 14:58:37.346066  108645 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.346090  108645 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0827 14:58:37.346186  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.346204  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.346292  108645 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0827 14:58:37.346326  108645 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0827 14:58:37.346329  108645 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.346446  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.346461  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.346538  108645 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0827 14:58:37.346664  108645 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0827 14:58:37.346661  108645 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.346773  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.346788  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.346857  108645 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0827 14:58:37.346878  108645 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.346905  108645 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0827 14:58:37.346955  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.346968  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.347810  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.348256  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.348968  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.349131  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.349153  108645 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0827 14:58:37.349276  108645 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.349373  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.349388  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.349457  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.349477  108645 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0827 14:58:37.349499  108645 master.go:434] Enabling API group "rbac.authorization.k8s.io".
I0827 14:58:37.349752  108645 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0827 14:58:37.349795  108645 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0827 14:58:37.351886  108645 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.352065  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.352085  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.352301  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.352425  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.352445  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.352584  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.352936  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.353068  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.353358  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.353475  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.353482  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.353737  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.353829  108645 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0827 14:58:37.353973  108645 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.354051  108645 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0827 14:58:37.354106  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.354122  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.354206  108645 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0827 14:58:37.354218  108645 master.go:434] Enabling API group "scheduling.k8s.io".
I0827 14:58:37.354298  108645 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0827 14:58:37.354345  108645 master.go:423] Skipping disabled API group "settings.k8s.io".
I0827 14:58:37.354470  108645 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.354594  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.354621  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.354723  108645 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0827 14:58:37.354842  108645 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.354936  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.354955  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.355069  108645 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0827 14:58:37.355103  108645 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.355212  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.355230  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.355279  108645 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0827 14:58:37.355306  108645 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0827 14:58:37.355333  108645 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.355441  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.355458  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.355534  108645 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0827 14:58:37.355672  108645 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0827 14:58:37.355677  108645 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.355801  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.355817  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.355890  108645 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0827 14:58:37.356010  108645 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.356133  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.356146  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.356167  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.356243  108645 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0827 14:58:37.356260  108645 master.go:434] Enabling API group "storage.k8s.io".
I0827 14:58:37.356648  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.357454  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.357585  108645 reflector.go:158] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0827 14:58:37.357800  108645 reflector.go:158] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0827 14:58:37.358109  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.358443  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.358452  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.358663  108645 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0827 14:58:37.358744  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.359064  108645 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0827 14:58:37.359595  108645 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.359755  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.359777  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.359919  108645 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0827 14:58:37.360083  108645 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.360143  108645 reflector.go:158] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0827 14:58:37.360201  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.360218  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.360737  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.360800  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.361311  108645 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0827 14:58:37.361422  108645 reflector.go:158] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0827 14:58:37.361507  108645 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.361654  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.361675  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.361807  108645 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0827 14:58:37.361933  108645 reflector.go:158] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0827 14:58:37.361932  108645 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.362053  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.362072  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.362190  108645 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0827 14:58:37.362292  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.362317  108645 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.362443  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.362467  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.362477  108645 reflector.go:158] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0827 14:58:37.362473  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.362600  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.362629  108645 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0827 14:58:37.362649  108645 master.go:434] Enabling API group "apps".
I0827 14:58:37.362683  108645 reflector.go:158] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0827 14:58:37.362682  108645 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.362798  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.362814  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.362912  108645 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0827 14:58:37.362940  108645 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.363008  108645 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0827 14:58:37.363310  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.363331  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.363407  108645 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0827 14:58:37.363434  108645 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.363525  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.363541  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.363759  108645 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0827 14:58:37.364675  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.365113  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.365201  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.365218  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.366772  108645 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0827 14:58:37.366841  108645 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0827 14:58:37.367067  108645 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.367238  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.367261  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.367608  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.367608  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.368483  108645 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0827 14:58:37.368512  108645 master.go:434] Enabling API group "admissionregistration.k8s.io".
I0827 14:58:37.368562  108645 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0827 14:58:37.368787  108645 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.369303  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:37.369344  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:37.369569  108645 store.go:1342] Monitoring events count at <storage-prefix>//events
I0827 14:58:37.369592  108645 master.go:434] Enabling API group "events.k8s.io".
I0827 14:58:37.369794  108645 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0827 14:58:37.369810  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.369819  108645 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.370199  108645 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.370682  108645 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.370767  108645 watch_cache.go:405] Replace watchCache (rev: 35205) 
I0827 14:58:37.371089  108645 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.371392  108645 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.371519  108645 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.371756  108645 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.371863  108645 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.372184  108645 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.372311  108645 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.373581  108645 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.373875  108645 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.374766  108645 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.375179  108645 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.376188  108645 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.376625  108645 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.377531  108645 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.377786  108645 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.378613  108645 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.379005  108645 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0827 14:58:37.379183  108645 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0827 14:58:37.379988  108645 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.380368  108645 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.380758  108645 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.381985  108645 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.382899  108645 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.384083  108645 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.384643  108645 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.385774  108645 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.386729  108645 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.387246  108645 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.388325  108645 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0827 14:58:37.388519  108645 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0827 14:58:37.389825  108645 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.390484  108645 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.391611  108645 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.392968  108645 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.394065  108645 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.395492  108645 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.396583  108645 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.398168  108645 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.398721  108645 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.399200  108645 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.399682  108645 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0827 14:58:37.399740  108645 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0827 14:58:37.400292  108645 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.400788  108645 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0827 14:58:37.400842  108645 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0827 14:58:37.401447  108645 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.401986  108645 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.402554  108645 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.403183  108645 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.403673  108645 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.404312  108645 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.404833  108645 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0827 14:58:37.404905  108645 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0827 14:58:37.405579  108645 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.406160  108645 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.406422  108645 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.407145  108645 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.407397  108645 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.407614  108645 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.408175  108645 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.408421  108645 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.408641  108645 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.409279  108645 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.409601  108645 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.409917  108645 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0827 14:58:37.409975  108645 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0827 14:58:37.409981  108645 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0827 14:58:37.410506  108645 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.411060  108645 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.411564  108645 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.412201  108645 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.412849  108645 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"880be24f-c026-404b-bd18-8d4dfe942954", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 14:58:37.415471  108645 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 14:58:37.415497  108645 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0827 14:58:37.415508  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:37.415519  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:37.415529  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:37.415537  108645 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:37.415577  108645 httplog.go:90] GET /healthz: (229.339µs) 0 [Go-http-client/1.1 127.0.0.1:42514]
I0827 14:58:37.416871  108645 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.61952ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:37.419835  108645 httplog.go:90] GET /api/v1/services: (1.28747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:37.424226  108645 httplog.go:90] GET /api/v1/services: (1.166874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:37.426238  108645 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 14:58:37.426271  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:37.426283  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:37.426294  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:37.426302  108645 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:37.426329  108645 httplog.go:90] GET /healthz: (229.644µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:37.427769  108645 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.601277ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42514]
I0827 14:58:37.428825  108645 httplog.go:90] GET /api/v1/services: (1.073564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:37.428865  108645 httplog.go:90] GET /api/v1/services: (1.547623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:37.429786  108645 httplog.go:90] POST /api/v1/namespaces: (1.598813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42514]
I0827 14:58:37.431263  108645 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.048929ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42514]
I0827 14:58:37.433618  108645 httplog.go:90] POST /api/v1/namespaces: (1.920025ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:37.435006  108645 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.043044ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:37.437004  108645 httplog.go:90] POST /api/v1/namespaces: (1.52204ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:37.516342  108645 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 14:58:37.516399  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:37.516416  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:37.516425  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:37.516435  108645 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:37.516524  108645 httplog.go:90] GET /healthz: (385.506µs) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:37.527604  108645 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 14:58:37.527652  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:37.527664  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:37.527674  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:37.527681  108645 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:37.527710  108645 httplog.go:90] GET /healthz: (246.747µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:37.616386  108645 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 14:58:37.616428  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:37.616441  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:37.616451  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:37.616466  108645 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:37.616501  108645 httplog.go:90] GET /healthz: (268.288µs) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:37.628137  108645 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 14:58:37.628170  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:37.628184  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:37.628194  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:37.628202  108645 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:37.628247  108645 httplog.go:90] GET /healthz: (279.41µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:37.716341  108645 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 14:58:37.716378  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:37.716390  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:37.716400  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:37.716408  108645 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:37.716440  108645 httplog.go:90] GET /healthz: (247.845µs) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:37.727566  108645 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 14:58:37.727635  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:37.727648  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:37.727657  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:37.727665  108645 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:37.727729  108645 httplog.go:90] GET /healthz: (262.753µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:37.816416  108645 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 14:58:37.816453  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:37.816466  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:37.816476  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:37.816484  108645 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:37.816528  108645 httplog.go:90] GET /healthz: (258.251µs) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:37.827567  108645 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 14:58:37.827737  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:37.827766  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:37.827789  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:37.827810  108645 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:37.827904  108645 httplog.go:90] GET /healthz: (458.413µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:37.916298  108645 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 14:58:37.916573  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:37.916675  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:37.916730  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:37.916760  108645 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:37.917675  108645 httplog.go:90] GET /healthz: (1.521595ms) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:37.927652  108645 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 14:58:37.927682  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:37.927692  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:37.927699  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:37.927704  108645 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:37.927739  108645 httplog.go:90] GET /healthz: (236.424µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.016279  108645 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 14:58:38.016315  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.016327  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:38.016337  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:38.016344  108645 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:38.016379  108645 httplog.go:90] GET /healthz: (243.988µs) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:38.027495  108645 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 14:58:38.027536  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.027546  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:38.027555  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:38.027561  108645 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:38.027583  108645 httplog.go:90] GET /healthz: (199.373µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.116368  108645 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 14:58:38.116400  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.116414  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:38.116425  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:38.116434  108645 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:38.116487  108645 httplog.go:90] GET /healthz: (309.203µs) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:38.127563  108645 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 14:58:38.127594  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.127610  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:38.127617  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:38.127623  108645 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:38.127648  108645 httplog.go:90] GET /healthz: (208.666µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.216283  108645 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 14:58:38.216315  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.216331  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:38.216347  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:38.216355  108645 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:38.216396  108645 httplog.go:90] GET /healthz: (280.448µs) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:38.227642  108645 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 14:58:38.227672  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.227681  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:38.227687  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:38.227693  108645 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:38.227715  108645 httplog.go:90] GET /healthz: (200.844µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.303509  108645 client.go:361] parsed scheme: "endpoint"
I0827 14:58:38.303596  108645 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 14:58:38.320572  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.320606  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:38.320617  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:38.320625  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:38.320672  108645 httplog.go:90] GET /healthz: (1.136192ms) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:38.328693  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.328722  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:38.328733  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:38.328742  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:38.328786  108645 httplog.go:90] GET /healthz: (1.325695ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.417418  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.012578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.417418  108645 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.458396ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42950]
I0827 14:58:38.417689  108645 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.077755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.418147  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.418174  108645 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 14:58:38.418184  108645 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 14:58:38.418192  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 14:58:38.418222  108645 httplog.go:90] GET /healthz: (1.152428ms) 0 [Go-http-client/1.1 127.0.0.1:42952]
I0827 14:58:38.419081  108645 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (993.157µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.419201  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.49549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42950]
I0827 14:58:38.420171  108645 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.331887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.420365  108645 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0827 14:58:38.420495  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (929.932µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42952]
I0827 14:58:38.420793  108645 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.330182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.421434  108645 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (900.718µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.422111  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.094048ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42952]
I0827 14:58:38.423263  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (760.957µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42952]
I0827 14:58:38.423455  108645 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.717214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.423588  108645 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0827 14:58:38.423603  108645 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0827 14:58:38.424553  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.024055ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42952]
I0827 14:58:38.425780  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (877.698µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.427744  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (771.062µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.428039  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.428062  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:38.428108  108645 httplog.go:90] GET /healthz: (817.027µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.429077  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.016874ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.430299  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (819.337µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.432098  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.351744ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.432301  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0827 14:58:38.433327  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (742.765µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.437709  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.621523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.438040  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0827 14:58:38.439336  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.039649ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.441665  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.847622ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.441878  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0827 14:58:38.442963  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (789.971µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.446660  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.315653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.446986  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0827 14:58:38.448173  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (956.739µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.449909  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.362687ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.450137  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0827 14:58:38.451141  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (823.583µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.452771  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.240908ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.453061  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0827 14:58:38.454123  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (767.089µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.455529  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.091299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.455697  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0827 14:58:38.456750  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (901.676µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.462064  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.397582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.462536  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0827 14:58:38.465679  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (2.925612ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.468105  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.821181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.468403  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0827 14:58:38.469540  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (960.37µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.471605  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.667214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.471919  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0827 14:58:38.472929  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (760.924µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.474641  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.269102ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.474895  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0827 14:58:38.475874  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (782.661µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.477787  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.448376ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.478081  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0827 14:58:38.479198  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (933.602µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.481284  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.691759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.481660  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0827 14:58:38.482558  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (700.247µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.485579  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.398977ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.485806  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0827 14:58:38.487293  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.222846ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.488907  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.152737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.489076  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0827 14:58:38.490156  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (845.441µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.492011  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.579362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.492209  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0827 14:58:38.493180  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (840.956µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.494849  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.339305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.495162  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0827 14:58:38.496192  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (831.507µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.497845  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.262469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.498138  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0827 14:58:38.499141  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (786.81µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.500981  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.370788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.501218  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0827 14:58:38.502258  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (844.75µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.504115  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.397818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.504287  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0827 14:58:38.505240  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (787.048µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.507156  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.511188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.507376  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0827 14:58:38.508371  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (788.918µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.510422  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.646361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.510656  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0827 14:58:38.512137  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.050145ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.514069  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.536682ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.514256  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0827 14:58:38.515334  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (928.341µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.517215  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.517275  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:38.517307  108645 httplog.go:90] GET /healthz: (1.296027ms) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:38.517482  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.797918ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.517695  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0827 14:58:38.518704  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (849.126µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.520479  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.457203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.520656  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0827 14:58:38.521676  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (855.235µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.523599  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.466694ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.523902  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0827 14:58:38.525686  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.49242ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.527577  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.337343ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.527824  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0827 14:58:38.528633  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.528823  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:38.529034  108645 httplog.go:90] GET /healthz: (1.664108ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.528762  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (690.62µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.531315  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.608652ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.531655  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0827 14:58:38.532910  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (860.893µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.535112  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.702257ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.535305  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0827 14:58:38.536454  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (911.875µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.539253  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.416764ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.539478  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0827 14:58:38.540538  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (890.457µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.542362  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.426937ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.542611  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0827 14:58:38.544072  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.279279ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.545983  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.48384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.546219  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0827 14:58:38.547286  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (871.539µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.549007  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.323926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.549343  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0827 14:58:38.550392  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (884.695µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.552233  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.425381ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.552554  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0827 14:58:38.553737  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (929.974µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.556510  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.360821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.558592  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0827 14:58:38.565897  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (6.453641ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.568433  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.653619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.568640  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0827 14:58:38.570182  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.364793ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.571970  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.334139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.572240  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0827 14:58:38.573592  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.084306ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.575634  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.564405ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.575926  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0827 14:58:38.577034  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (842.34µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.578899  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.334033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.579145  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0827 14:58:38.580006  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (645.525µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.581885  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.414743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.582250  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0827 14:58:38.583265  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (816.018µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.586323  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.512539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.586632  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0827 14:58:38.587823  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (940.649µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.589953  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.488237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.590176  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0827 14:58:38.591095  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (736.082µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.592816  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.289652ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.593172  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0827 14:58:38.594237  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (760.931µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.595827  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.243445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.596105  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0827 14:58:38.597114  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (740.476µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.598818  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.22139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.598954  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0827 14:58:38.599969  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (777.012µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.601903  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.392737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.602254  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0827 14:58:38.603680  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.157135ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.605904  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.660652ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.606296  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0827 14:58:38.607547  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (989.316µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.609399  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.480317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.609577  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0827 14:58:38.610538  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (786.378µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.612452  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.380288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.612876  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0827 14:58:38.613963  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (763.526µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.616818  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.616945  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:38.617126  108645 httplog.go:90] GET /healthz: (1.112915ms) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:38.617282  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.740757ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.617490  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0827 14:58:38.618405  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (719.608µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.628538  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.628618  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:38.628670  108645 httplog.go:90] GET /healthz: (1.233872ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.638097  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.271065ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.638415  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0827 14:58:38.665379  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (9.625204ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.678009  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.230668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.678295  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0827 14:58:38.697085  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.343138ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.717857  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.718001  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:38.718170  108645 httplog.go:90] GET /healthz: (1.308266ms) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:38.719258  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.428186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.719520  108645 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0827 14:58:38.728577  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.728612  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:38.728672  108645 httplog.go:90] GET /healthz: (1.204228ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.737197  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.45176ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.758229  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.441901ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.759110  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0827 14:58:38.777316  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.564845ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.797853  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.126642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.798140  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0827 14:58:38.816906  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.816927  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.129151ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:38.816943  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:38.816984  108645 httplog.go:90] GET /healthz: (815.019µs) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:38.828508  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.828546  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:38.828678  108645 httplog.go:90] GET /healthz: (1.072461ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.837800  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.098609ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.838110  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0827 14:58:38.857609  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.815067ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.878344  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.471597ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.878721  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0827 14:58:38.897541  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.779039ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.918254  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.918290  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:38.918333  108645 httplog.go:90] GET /healthz: (1.727771ms) 0 [Go-http-client/1.1 127.0.0.1:42518]
I0827 14:58:38.918573  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.796485ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.918886  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0827 14:58:38.928564  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:38.928801  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:38.928965  108645 httplog.go:90] GET /healthz: (1.492598ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.937074  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.382443ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.958167  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.197091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.958434  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0827 14:58:38.977511  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.789556ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.997891  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.159389ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:38.998169  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0827 14:58:39.017171  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.017200  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.017238  108645 httplog.go:90] GET /healthz: (1.115136ms) 0 [Go-http-client/1.1 127.0.0.1:42518]
I0827 14:58:39.017784  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.981134ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:39.028414  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.028443  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.028487  108645 httplog.go:90] GET /healthz: (1.026013ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:39.037647  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.94111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:39.037881  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0827 14:58:39.056957  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.161944ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:39.077972  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.145686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:39.078226  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0827 14:58:39.097006  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.212261ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:39.116930  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.116953  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.116982  108645 httplog.go:90] GET /healthz: (788.347µs) 0 [Go-http-client/1.1 127.0.0.1:42518]
I0827 14:58:39.119607  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.320386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:39.119791  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0827 14:58:39.129614  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.129652  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.129701  108645 httplog.go:90] GET /healthz: (2.110233ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:39.137009  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.307838ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:39.159869  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.082784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:39.160165  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0827 14:58:39.177093  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.36829ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:39.197777  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.986215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:39.197979  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0827 14:58:39.216831  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.097757ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:39.217113  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.217139  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.217192  108645 httplog.go:90] GET /healthz: (1.079818ms) 0 [Go-http-client/1.1 127.0.0.1:42518]
I0827 14:58:39.228407  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.228444  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.228495  108645 httplog.go:90] GET /healthz: (1.033084ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.238000  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.230668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.238308  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0827 14:58:39.257496  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.521836ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.277853  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.148563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.278128  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0827 14:58:39.296966  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.275326ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.317279  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.317311  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.317345  108645 httplog.go:90] GET /healthz: (1.180337ms) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:39.317666  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.830736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.317904  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0827 14:58:39.328530  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.328565  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.328619  108645 httplog.go:90] GET /healthz: (1.148615ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.337080  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.392279ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.358131  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.271337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.358801  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0827 14:58:39.377594  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.43765ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.397923  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.097422ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.398392  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0827 14:58:39.416937  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.416977  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.417039  108645 httplog.go:90] GET /healthz: (891.665µs) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:39.417114  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.38866ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.428500  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.428532  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.428571  108645 httplog.go:90] GET /healthz: (1.079253ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.440098  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.209109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.440367  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0827 14:58:39.457109  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.327418ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.477511  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.76103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.477950  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0827 14:58:39.497292  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.555977ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.517436  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.517469  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.517512  108645 httplog.go:90] GET /healthz: (1.393979ms) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:39.517899  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.984298ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.518203  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0827 14:58:39.528491  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.528742  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.529062  108645 httplog.go:90] GET /healthz: (1.531338ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.537195  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.477209ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.559723  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.86467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.560311  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0827 14:58:39.577001  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.245798ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.598214  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.131632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.598676  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0827 14:58:39.617198  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.617228  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.617252  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.579854ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.617308  108645 httplog.go:90] GET /healthz: (1.15813ms) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:39.628543  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.628574  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.628609  108645 httplog.go:90] GET /healthz: (1.028624ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.638505  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.726258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.638764  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0827 14:58:39.657197  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.408508ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.678215  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.375225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.678649  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0827 14:58:39.697097  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.308255ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.717736  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.717769  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.717809  108645 httplog.go:90] GET /healthz: (1.638578ms) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:39.717977  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.190665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.718197  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0827 14:58:39.728492  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.728523  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.728558  108645 httplog.go:90] GET /healthz: (1.081441ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.738437  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.455076ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.759740  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.836023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.760362  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0827 14:58:39.777195  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.449953ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.797963  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.202224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.798241  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0827 14:58:39.817295  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.817336  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.817386  108645 httplog.go:90] GET /healthz: (1.317696ms) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:39.817808  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.095816ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.828239  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.828272  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.828313  108645 httplog.go:90] GET /healthz: (871.177µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.838409  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.624623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.838654  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0827 14:58:39.857531  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.714457ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.877928  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.049985ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.878294  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0827 14:58:39.897173  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.419907ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.922180  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.922216  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.922251  108645 httplog.go:90] GET /healthz: (1.301122ms) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:39.923247  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.575945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.923641  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0827 14:58:39.928457  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:39.928485  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:39.928520  108645 httplog.go:90] GET /healthz: (1.094811ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.937268  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.52879ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.957977  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.288159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.958263  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0827 14:58:39.976987  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.272073ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.998501  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.74049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:39.998832  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0827 14:58:40.017141  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:40.017413  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:40.017478  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.655248ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:40.017517  108645 httplog.go:90] GET /healthz: (1.466208ms) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:40.028408  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:40.028440  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:40.028479  108645 httplog.go:90] GET /healthz: (1.004339ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.037757  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.993031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.038078  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0827 14:58:40.059380  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (3.560343ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.077811  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.115403ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.078051  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0827 14:58:40.097355  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.572165ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.117482  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:40.117515  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:40.117547  108645 httplog.go:90] GET /healthz: (1.087478ms) 0 [Go-http-client/1.1 127.0.0.1:42518]
I0827 14:58:40.118234  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.106055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.118557  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0827 14:58:40.128635  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:40.128664  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:40.128703  108645 httplog.go:90] GET /healthz: (1.195246ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.137138  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.38965ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.157746  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.970686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.158053  108645 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0827 14:58:40.176974  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.250224ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.178766  108645 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.305867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.197926  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.188161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.198216  108645 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0827 14:58:40.220217  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:40.220245  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:40.220283  108645 httplog.go:90] GET /healthz: (1.071433ms) 0 [Go-http-client/1.1 127.0.0.1:42516]
I0827 14:58:40.221822  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.914703ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:40.224259  108645 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.98911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.229103  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:40.229281  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:40.229510  108645 httplog.go:90] GET /healthz: (1.52553ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.237939  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.215381ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.238221  108645 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0827 14:58:40.258356  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (2.552066ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.262464  108645 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.5564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.277699  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.991251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.277981  108645 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0827 14:58:40.299521  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (3.69863ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.301904  108645 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.631327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.317049  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:40.317083  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:40.317125  108645 httplog.go:90] GET /healthz: (1.025906ms) 0 [Go-http-client/1.1 127.0.0.1:42518]
I0827 14:58:40.317734  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.987506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.317926  108645 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0827 14:58:40.328882  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:40.328906  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:40.328936  108645 httplog.go:90] GET /healthz: (1.50887ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.336661  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.035289ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.339604  108645 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.177705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.360752  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (5.057207ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.361666  108645 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0827 14:58:40.377946  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (2.238746ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.379810  108645 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.254653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.399489  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.673695ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.399752  108645 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0827 14:58:40.417158  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:40.417188  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:40.417197  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.478167ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.417214  108645 httplog.go:90] GET /healthz: (1.14419ms) 0 [Go-http-client/1.1 127.0.0.1:42518]
I0827 14:58:40.419253  108645 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.489021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.428429  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:40.428459  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:40.428538  108645 httplog.go:90] GET /healthz: (1.015379ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.439587  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (3.033014ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.441934  108645 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0827 14:58:40.461085  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (5.266953ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.462882  108645 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.190386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.477846  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.087592ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.478162  108645 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0827 14:58:40.496933  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.163827ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.500204  108645 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.731517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.516733  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:40.516770  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:40.516803  108645 httplog.go:90] GET /healthz: (782.707µs) 0 [Go-http-client/1.1 127.0.0.1:42518]
I0827 14:58:40.517703  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.954687ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.518166  108645 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0827 14:58:40.529444  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:40.529473  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:40.529520  108645 httplog.go:90] GET /healthz: (1.960653ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.537134  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.380895ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.541114  108645 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.550868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.559593  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.813874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.559851  108645 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0827 14:58:40.577793  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (2.066554ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.579792  108645 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.452352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.597802  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.043752ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.598117  108645 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0827 14:58:40.617218  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:40.617251  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:40.617299  108645 httplog.go:90] GET /healthz: (1.297342ms) 0 [Go-http-client/1.1 127.0.0.1:42518]
I0827 14:58:40.617472  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.777137ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.619133  108645 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.256356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.628537  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:40.628568  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:40.628614  108645 httplog.go:90] GET /healthz: (1.139163ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.640351  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (4.708144ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.640646  108645 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0827 14:58:40.657150  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.368904ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.659781  108645 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.494769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.678074  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.236062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.678367  108645 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0827 14:58:40.697083  108645 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.270976ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.698721  108645 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.169435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.717131  108645 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 14:58:40.717427  108645 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 14:58:40.717655  108645 httplog.go:90] GET /healthz: (1.555267ms) 0 [Go-http-client/1.1 127.0.0.1:42518]
I0827 14:58:40.718242  108645 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.444288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.718443  108645 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0827 14:58:40.732060  108645 httplog.go:90] GET /healthz: (4.610988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:40.733766  108645 httplog.go:90] GET /api/v1/namespaces/default: (1.383007ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:40.736096  108645 httplog.go:90] POST /api/v1/namespaces: (1.80895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:40.737779  108645 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.22341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:40.741528  108645 httplog.go:90] POST /api/v1/namespaces/default/services: (3.259337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:40.742769  108645 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (878.353µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:40.744645  108645 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.473006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:40.817520  108645 httplog.go:90] GET /healthz: (1.295805ms) 200 [Go-http-client/1.1 127.0.0.1:42518]
W0827 14:58:40.818347  108645 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 14:58:40.818403  108645 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 14:58:40.818417  108645 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 14:58:40.818440  108645 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 14:58:40.818451  108645 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 14:58:40.818466  108645 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 14:58:40.818480  108645 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 14:58:40.818492  108645 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 14:58:40.818502  108645 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 14:58:40.818513  108645 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 14:58:40.818568  108645 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0827 14:58:40.818585  108645 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0827 14:58:40.818602  108645 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0827 14:58:40.818800  108645 shared_informer.go:197] Waiting for caches to sync for scheduler
I0827 14:58:40.819055  108645 reflector.go:120] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:230
I0827 14:58:40.819072  108645 reflector.go:158] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:230
I0827 14:58:40.820008  108645 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (593.869µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:40.820828  108645 get.go:250] Starting watch for /api/v1/pods, rv=35205 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=7m4s
I0827 14:58:40.924319  108645 shared_informer.go:227] caches populated
I0827 14:58:40.924354  108645 shared_informer.go:204] Caches are synced for scheduler 
I0827 14:58:40.924704  108645 reflector.go:120] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.924723  108645 reflector.go:158] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.925130  108645 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.925154  108645 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.925536  108645 reflector.go:120] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.925548  108645 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.925833  108645 reflector.go:120] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.925843  108645 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.926143  108645 reflector.go:120] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.926155  108645 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.926447  108645 reflector.go:120] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.926458  108645 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.926734  108645 reflector.go:120] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.926743  108645 reflector.go:158] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.927036  108645 reflector.go:120] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.927049  108645 reflector.go:158] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.927340  108645 reflector.go:120] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.927350  108645 reflector.go:158] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.927650  108645 reflector.go:120] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.927675  108645 reflector.go:158] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0827 14:58:40.929550  108645 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (588.608µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:40.929782  108645 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (628.956µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43748]
I0827 14:58:40.929931  108645 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (538.02µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0827 14:58:40.930386  108645 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (468.745µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43758]
I0827 14:58:40.930618  108645 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (470.607µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43752]
I0827 14:58:40.930935  108645 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (444.219µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43760]
I0827 14:58:40.931164  108645 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (430.964µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43754]
I0827 14:58:40.931473  108645 get.go:250] Starting watch for /api/v1/services, rv=35713 labels= fields= timeout=7m45s
I0827 14:58:40.931490  108645 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (436.985µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43762]
I0827 14:58:40.931489  108645 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=35205 labels= fields= timeout=6m49s
I0827 14:58:40.931915  108645 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (2.731983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43756]
I0827 14:58:40.932064  108645 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (389.487µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0827 14:58:40.932308  108645 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=35205 labels= fields= timeout=9m57s
I0827 14:58:40.932391  108645 get.go:250] Starting watch for /api/v1/nodes, rv=35205 labels= fields= timeout=6m3s
I0827 14:58:40.932617  108645 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=35205 labels= fields= timeout=9m35s
I0827 14:58:40.933439  108645 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=35205 labels= fields= timeout=8m29s
I0827 14:58:40.933566  108645 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=35205 labels= fields= timeout=9m23s
I0827 14:58:40.933690  108645 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=35205 labels= fields= timeout=5m52s
I0827 14:58:40.933698  108645 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=35205 labels= fields= timeout=8m8s
I0827 14:58:40.933799  108645 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=35205 labels= fields= timeout=5m55s
I0827 14:58:41.024650  108645 shared_informer.go:227] caches populated
I0827 14:58:41.124805  108645 shared_informer.go:227] caches populated
I0827 14:58:41.225210  108645 shared_informer.go:227] caches populated
I0827 14:58:41.325422  108645 shared_informer.go:227] caches populated
I0827 14:58:41.425645  108645 shared_informer.go:227] caches populated
I0827 14:58:41.525883  108645 shared_informer.go:227] caches populated
I0827 14:58:41.626092  108645 shared_informer.go:227] caches populated
I0827 14:58:41.728273  108645 shared_informer.go:227] caches populated
I0827 14:58:41.828519  108645 shared_informer.go:227] caches populated
I0827 14:58:41.928708  108645 shared_informer.go:227] caches populated
I0827 14:58:41.930753  108645 reflector.go:241] k8s.io/client-go/informers/factory.go:133: forcing resync
I0827 14:58:41.930974  108645 reflector.go:241] k8s.io/client-go/informers/factory.go:133: forcing resync
I0827 14:58:41.931266  108645 reflector.go:241] k8s.io/client-go/informers/factory.go:133: forcing resync
I0827 14:58:41.932247  108645 reflector.go:241] k8s.io/client-go/informers/factory.go:133: forcing resync
I0827 14:58:41.933202  108645 reflector.go:241] k8s.io/client-go/informers/factory.go:133: forcing resync
I0827 14:58:41.933400  108645 reflector.go:241] k8s.io/client-go/informers/factory.go:133: forcing resync
I0827 14:58:42.028983  108645 shared_informer.go:227] caches populated
W0827 14:58:42.029463  108645 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0827 14:58:42.032522  108645 httplog.go:90] POST /api/v1/nodes: (2.700091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0827 14:58:42.032642  108645 node_tree.go:93] Added node "node1" in group "" to NodeTree
I0827 14:58:42.033452  108645 httplog.go:90] POST /api/v1/namespaces/kube-system/pods: (549.375µs) 403 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0827 14:58:42.034208  108645 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=35205&timeout=6m49s&timeoutSeconds=409&watch=true: (1.10303456s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42516]
I0827 14:58:42.034228  108645 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=35713&timeout=7m45s&timeoutSeconds=465&watch=true: (1.103005711s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43748]
I0827 14:58:42.034305  108645 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=35205&timeout=9m23s&timeoutSeconds=563&watch=true: (1.10097228s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43754]
I0827 14:58:42.034337  108645 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=35205&timeout=8m29s&timeoutSeconds=509&watch=true: (1.101148626s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43768]
I0827 14:58:42.034378  108645 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=35205&timeout=8m8s&timeoutSeconds=488&watch=true: (1.10105218s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43766]
I0827 14:58:42.034411  108645 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=35205&timeout=5m52s&timeoutSeconds=352&watch=true: (1.101214281s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43756]
I0827 14:58:42.034421  108645 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=35205&timeout=5m55s&timeoutSeconds=355&watch=true: (1.100926275s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43772]
E0827 14:58:42.034492  108645 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0827 14:58:42.034491  108645 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=35205&timeout=6m3s&timeoutSeconds=363&watch=true: (1.102336807s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43752]
I0827 14:58:42.034524  108645 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=35205&timeoutSeconds=424&watch=true: (1.214073155s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42518]
I0827 14:58:42.034603  108645 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=35205&timeout=9m57s&timeoutSeconds=597&watch=true: (1.102551818s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43760]
I0827 14:58:42.034603  108645 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=35205&timeout=9m35s&timeoutSeconds=575&watch=true: (1.102327253s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43770]
I0827 14:58:42.038138  108645 httplog.go:90] DELETE /api/v1/nodes: (4.335144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0827 14:58:42.038333  108645 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0827 14:58:42.039747  108645 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.121977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0827 14:58:42.041581  108645 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.495144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
--- FAIL: TestPodPriorityResolution (4.74s)
    preemption_test.go:430: Test [PodPriority/system-node-critical]: Error running pause pod: Error creating pause pod: pods "pod1-system-node-critical" is forbidden: no PriorityClass with name system-node-critical was found

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190827-145023.xml
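The admission error above means the test apiserver rejected the pause pod because the built-in PriorityClass `system-node-critical` had not been registered before the pod was created. As a rough illustration only (not the test's actual fixture code), a pod that triggers this resolution path references the class by name, and the class itself must already exist as a cluster-scoped object:

```yaml
# Hypothetical manifests sketching the dependency: the pod's priorityClassName
# is resolved by the priority admission plugin, which requires the
# PriorityClass object to exist first. Names mirror the failing test case.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: system-node-critical   # normally auto-created by the bootstrap post-start hook
value: 2000001000
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1-system-node-critical
  namespace: kube-system       # system classes may only be used in kube-system
spec:
  priorityClassName: system-node-critical
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```

In the passing case, the `poststarthook/scheduling/bootstrap-system-priority-classes` hook (visible in the healthz output above) creates the system classes, so the likely race here is the test creating its pod before that bootstrap completed or against an apiserver where it had not run.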



2837 tests passed; 4 tests skipped.

Error lines from build-log.txt

... skipping 549 lines ...
W0827 14:45:10.840] I0827 14:45:10.663721   52919 controllermanager.go:532] Started "statefulset"
W0827 14:45:10.840] I0827 14:45:10.664553   52919 controllermanager.go:532] Started "persistentvolume-expander"
W0827 14:45:10.840] I0827 14:45:10.665556   52919 stateful_set.go:145] Starting stateful set controller
W0827 14:45:10.840] I0827 14:45:10.665596   52919 shared_informer.go:197] Waiting for caches to sync for stateful set
W0827 14:45:10.840] I0827 14:45:10.666401   52919 expand_controller.go:300] Starting expand controller
W0827 14:45:10.840] I0827 14:45:10.666444   52919 shared_informer.go:197] Waiting for caches to sync for expand
W0827 14:45:10.840] E0827 14:45:10.668515   52919 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0827 14:45:10.841] W0827 14:45:10.668564   52919 controllermanager.go:524] Skipping "service"
W0827 14:45:10.841] I0827 14:45:10.669078   52919 controllermanager.go:532] Started "clusterrole-aggregation"
W0827 14:45:10.841] I0827 14:45:10.669529   52919 controllermanager.go:532] Started "pvc-protection"
W0827 14:45:10.841] W0827 14:45:10.669549   52919 controllermanager.go:524] Skipping "ttl-after-finished"
W0827 14:45:10.841] I0827 14:45:10.670035   52919 controllermanager.go:532] Started "podgc"
W0827 14:45:10.841] I0827 14:45:10.670352   52919 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
... skipping 42 lines ...
W0827 14:45:10.883] I0827 14:45:10.879609   52919 controllermanager.go:532] Started "disruption"
W0827 14:45:10.883] I0827 14:45:10.879885   52919 core.go:185] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0827 14:45:10.883] W0827 14:45:10.880180   52919 controllermanager.go:524] Skipping "route"
W0827 14:45:10.884] I0827 14:45:10.880830   52919 node_lifecycle_controller.go:77] Sending events to api server
W0827 14:45:10.884] I0827 14:45:10.879767   52919 disruption.go:333] Starting disruption controller
W0827 14:45:10.884] I0827 14:45:10.878501   52919 deployment_controller.go:152] Starting deployment controller
W0827 14:45:10.884] E0827 14:45:10.881122   52919 core.go:175] failed to start cloud node lifecycle controller: no cloud provider provided
W0827 14:45:10.884] W0827 14:45:10.881297   52919 controllermanager.go:524] Skipping "cloud-node-lifecycle"
W0827 14:45:10.884] I0827 14:45:10.881780   52919 controllermanager.go:532] Started "cronjob"
W0827 14:45:10.885] I0827 14:45:10.882228   52919 controllermanager.go:532] Started "ttl"
W0827 14:45:10.885] W0827 14:45:10.882249   52919 controllermanager.go:511] "bootstrapsigner" is disabled
W0827 14:45:10.885] I0827 14:45:10.882976   52919 controllermanager.go:532] Started "horizontalpodautoscaling"
W0827 14:45:10.885] W0827 14:45:10.883035   52919 controllermanager.go:524] Skipping "nodeipam"
... skipping 33 lines ...
W0827 14:45:11.572] I0827 14:45:11.290790   52919 controllermanager.go:532] Started "garbagecollector"
W0827 14:45:11.572] I0827 14:45:11.290874   52919 shared_informer.go:197] Waiting for caches to sync for garbage collector
W0827 14:45:11.572] I0827 14:45:11.290917   52919 graph_builder.go:282] GraphBuilder running
W0827 14:45:11.573] I0827 14:45:11.291916   52919 controllermanager.go:532] Started "daemonset"
W0827 14:45:11.573] I0827 14:45:11.292685   52919 daemon_controller.go:267] Starting daemon sets controller
W0827 14:45:11.573] I0827 14:45:11.292752   52919 shared_informer.go:197] Waiting for caches to sync for daemon sets
W0827 14:45:11.573] W0827 14:45:11.324523   52919 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0827 14:45:11.573] I0827 14:45:11.359660   52919 shared_informer.go:204] Caches are synced for namespace 
W0827 14:45:11.573] I0827 14:45:11.362700   52919 shared_informer.go:204] Caches are synced for certificate 
W0827 14:45:11.574] I0827 14:45:11.362705   52919 shared_informer.go:204] Caches are synced for PV protection 
W0827 14:45:11.574] I0827 14:45:11.362729   52919 shared_informer.go:204] Caches are synced for service account 
W0827 14:45:11.574] I0827 14:45:11.363395   52919 shared_informer.go:204] Caches are synced for ReplicaSet 
W0827 14:45:11.574] I0827 14:45:11.366095   49414 controller.go:606] quota admission added evaluator for: serviceaccounts
... skipping 83 lines ...
I0827 14:45:15.064] +++ working dir: /go/src/k8s.io/kubernetes
I0827 14:45:15.066] +++ command: run_RESTMapper_evaluation_tests
I0827 14:45:15.079] +++ [0827 14:45:15] Creating namespace namespace-1566917115-32073
I0827 14:45:15.153] namespace/namespace-1566917115-32073 created
I0827 14:45:15.226] Context "test" modified.
I0827 14:45:15.233] +++ [0827 14:45:15] Testing RESTMapper
I0827 14:45:15.338] +++ [0827 14:45:15] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0827 14:45:15.352] +++ exit code: 0
I0827 14:45:15.478] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0827 14:45:15.480] bindings                                                                      true         Binding
I0827 14:45:15.482] componentstatuses                 cs                                          false        ComponentStatus
I0827 14:45:15.484] configmaps                        cm                                          true         ConfigMap
I0827 14:45:15.485] endpoints                         ep                                          true         Endpoints
... skipping 664 lines ...
I0827 14:45:36.951] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0827 14:45:37.040] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0827 14:45:37.113] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0827 14:45:37.205] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0827 14:45:37.390] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:45:37.581] (Bpod/env-test-pod created
W0827 14:45:37.681] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0827 14:45:37.682] error: setting 'all' parameter but found a non empty selector. 
W0827 14:45:37.682] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0827 14:45:37.683] I0827 14:45:36.586103   49414 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0827 14:45:37.683] error: min-available and max-unavailable cannot be both specified
I0827 14:45:37.783] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0827 14:45:37.784] Name:         env-test-pod
I0827 14:45:37.784] Namespace:    test-kubectl-describe-pod
I0827 14:45:37.785] Priority:     0
I0827 14:45:37.785] Node:         <none>
I0827 14:45:37.785] Labels:       <none>
... skipping 173 lines ...
I0827 14:45:51.976] (Bpod/valid-pod patched
I0827 14:45:52.080] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0827 14:45:52.159] (Bpod/valid-pod patched
I0827 14:45:52.257] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0827 14:45:52.460] (Bpod/valid-pod patched
I0827 14:45:52.587] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0827 14:45:52.780] (B+++ [0827 14:45:52] "kubectl patch with resourceVersion 500" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0827 14:45:53.022] pod "valid-pod" deleted
I0827 14:45:53.035] pod/valid-pod replaced
I0827 14:45:53.140] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0827 14:45:53.306] (BSuccessful
I0827 14:45:53.306] message:error: --grace-period must have --force specified
I0827 14:45:53.306] has:\-\-grace-period must have \-\-force specified
I0827 14:45:53.466] Successful
I0827 14:45:53.466] message:error: --timeout must have --force specified
I0827 14:45:53.467] has:\-\-timeout must have \-\-force specified
W0827 14:45:53.653] W0827 14:45:53.652935   52919 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0827 14:45:53.755] node/node-v1-test created
I0827 14:45:53.819] node/node-v1-test replaced
I0827 14:45:53.931] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0827 14:45:54.021] (Bnode "node-v1-test" deleted
I0827 14:45:54.152] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0827 14:45:54.487] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 46 lines ...
I0827 14:45:57.508] core.sh:628: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:45:57.678] (Bpod/test-pod created
W0827 14:45:57.779] Edit cancelled, no changes made.
W0827 14:45:57.779] Edit cancelled, no changes made.
W0827 14:45:57.780] Edit cancelled, no changes made.
W0827 14:45:57.780] Edit cancelled, no changes made.
W0827 14:45:57.780] error: 'name' already has a value (valid-pod), and --overwrite is false
W0827 14:45:57.780] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0827 14:45:57.880] core.sh:632: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
I0827 14:45:58.122] (Bpod/test-pod replaced
I0827 14:45:58.220] core.sh:640: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-replaced
I0827 14:45:58.514] (Bpod/test-pod configured
W0827 14:45:58.615] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
... skipping 62 lines ...
I0827 14:46:05.000] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0827 14:46:05.003] +++ working dir: /go/src/k8s.io/kubernetes
I0827 14:46:05.006] +++ command: run_kubectl_create_error_tests
I0827 14:46:05.021] +++ [0827 14:46:05] Creating namespace namespace-1566917165-20470
I0827 14:46:05.125] namespace/namespace-1566917165-20470 created
I0827 14:46:05.222] Context "test" modified.
I0827 14:46:05.229] +++ [0827 14:46:05] Testing kubectl create with error
W0827 14:46:05.330] Error: must specify one of -f and -k
W0827 14:46:05.330] 
W0827 14:46:05.330] Create a resource from a file or from stdin.
W0827 14:46:05.330] 
W0827 14:46:05.330]  JSON and YAML formats are accepted.
W0827 14:46:05.331] 
W0827 14:46:05.331] Examples:
... skipping 41 lines ...
W0827 14:46:05.339] 
W0827 14:46:05.339] Usage:
W0827 14:46:05.339]   kubectl create -f FILENAME [options]
W0827 14:46:05.340] 
W0827 14:46:05.340] Use "kubectl <command> --help" for more information about a given command.
W0827 14:46:05.340] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0827 14:46:05.553] +++ [0827 14:46:05] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0827 14:46:05.671] kubectl convert is DEPRECATED and will be removed in a future version.
W0827 14:46:05.671] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0827 14:46:05.812] +++ exit code: 0
I0827 14:46:05.851] Recording: run_kubectl_apply_tests
I0827 14:46:05.851] Running command: run_kubectl_apply_tests
I0827 14:46:05.876] 
... skipping 17 lines ...
I0827 14:46:07.961] (Bpod "test-pod" deleted
I0827 14:46:08.255] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W0827 14:46:08.575] I0827 14:46:08.574737   49414 client.go:361] parsed scheme: "endpoint"
W0827 14:46:08.576] I0827 14:46:08.575126   49414 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0827 14:46:08.581] I0827 14:46:08.580802   49414 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I0827 14:46:08.682] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W0827 14:46:08.783] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0827 14:46:08.884] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0827 14:46:08.884] +++ exit code: 0
I0827 14:46:08.892] Recording: run_kubectl_run_tests
I0827 14:46:08.893] Running command: run_kubectl_run_tests
I0827 14:46:08.917] 
I0827 14:46:08.919] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 96 lines ...
I0827 14:46:12.252] Context "test" modified.
I0827 14:46:12.260] +++ [0827 14:46:12] Testing kubectl create filter
I0827 14:46:12.397] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:46:12.679] (Bpod/selector-test-pod created
I0827 14:46:12.824] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0827 14:46:12.938] (BSuccessful
I0827 14:46:12.939] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0827 14:46:12.939] has:pods "selector-test-pod-dont-apply" not found
I0827 14:46:13.053] pod "selector-test-pod" deleted
I0827 14:46:13.079] +++ exit code: 0
I0827 14:46:13.113] Recording: run_kubectl_apply_deployments_tests
I0827 14:46:13.114] Running command: run_kubectl_apply_deployments_tests
I0827 14:46:13.138] 
... skipping 29 lines ...
W0827 14:46:16.185] I0827 14:46:16.090059   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917173-7754", Name:"nginx", UID:"180cf8ef-9077-4408-871d-e84610268437", APIVersion:"apps/v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7dbc4d9f to 3
W0827 14:46:16.186] I0827 14:46:16.096278   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917173-7754", Name:"nginx-7dbc4d9f", UID:"cdd39f80-b44e-4192-82b0-1db66e14ab11", APIVersion:"apps/v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-8trk9
W0827 14:46:16.186] I0827 14:46:16.105112   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917173-7754", Name:"nginx-7dbc4d9f", UID:"cdd39f80-b44e-4192-82b0-1db66e14ab11", APIVersion:"apps/v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-prvhm
W0827 14:46:16.187] I0827 14:46:16.118742   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917173-7754", Name:"nginx-7dbc4d9f", UID:"cdd39f80-b44e-4192-82b0-1db66e14ab11", APIVersion:"apps/v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-gnjtd
I0827 14:46:16.287] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0827 14:46:20.518] (BSuccessful
I0827 14:46:20.519] message:Error from server (Conflict): error when applying patch:
I0827 14:46:20.520] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1566917173-7754\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0827 14:46:20.520] to:
I0827 14:46:20.520] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0827 14:46:20.521] Name: "nginx", Namespace: "namespace-1566917173-7754"
I0827 14:46:20.525] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1566917173-7754\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-08-27T14:46:16Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map["f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-08-27T14:46:16Z"] map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] 
"f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]] "k:{\"type\":\"Progressing\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-08-27T14:46:16Z"]] "name":"nginx" "namespace":"namespace-1566917173-7754" "resourceVersion":"598" "selfLink":"/apis/apps/v1/namespaces/namespace-1566917173-7754/deployments/nginx" "uid":"180cf8ef-9077-4408-871d-e84610268437"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-08-27T14:46:16Z" "lastUpdateTime":"2019-08-27T14:46:16Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-08-27T14:46:16Z" "lastUpdateTime":"2019-08-27T14:46:16Z" "message":"ReplicaSet \"nginx-7dbc4d9f\" is progressing." 
"reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0827 14:46:20.526] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0827 14:46:20.526] has:Error from server (Conflict)
W0827 14:46:20.627] I0827 14:46:19.137900   52919 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1566917161-7387
I0827 14:46:25.864] deployment.apps/nginx configured
I0827 14:46:25.962] Successful
I0827 14:46:25.962] message:        "name": "nginx2"
I0827 14:46:25.962]           "name": "nginx2"
I0827 14:46:25.963] has:"name": "nginx2"
... skipping 169 lines ...
I0827 14:46:33.217] +++ [0827 14:46:33] Creating namespace namespace-1566917193-18113
I0827 14:46:33.293] namespace/namespace-1566917193-18113 created
I0827 14:46:33.369] Context "test" modified.
I0827 14:46:33.377] +++ [0827 14:46:33] Testing kubectl get
I0827 14:46:33.470] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:46:33.559] (BSuccessful
I0827 14:46:33.559] message:Error from server (NotFound): pods "abc" not found
I0827 14:46:33.560] has:pods "abc" not found
I0827 14:46:33.669] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:46:33.754] (BSuccessful
I0827 14:46:33.755] message:Error from server (NotFound): pods "abc" not found
I0827 14:46:33.755] has:pods "abc" not found
I0827 14:46:33.843] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:46:33.927] (BSuccessful
I0827 14:46:33.927] message:{
I0827 14:46:33.927]     "apiVersion": "v1",
I0827 14:46:33.927]     "items": [],
... skipping 23 lines ...
I0827 14:46:34.280] has not:No resources found
I0827 14:46:34.382] Successful
I0827 14:46:34.382] message:NAME
I0827 14:46:34.383] has not:No resources found
I0827 14:46:34.471] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:46:34.568] (BSuccessful
I0827 14:46:34.569] message:error: the server doesn't have a resource type "foobar"
I0827 14:46:34.569] has not:No resources found
I0827 14:46:34.660] Successful
I0827 14:46:34.660] message:No resources found in namespace-1566917193-18113 namespace.
I0827 14:46:34.660] has:No resources found
I0827 14:46:34.748] Successful
I0827 14:46:34.748] message:
I0827 14:46:34.748] has not:No resources found
I0827 14:46:34.834] Successful
I0827 14:46:34.834] message:No resources found in namespace-1566917193-18113 namespace.
I0827 14:46:34.834] has:No resources found
I0827 14:46:34.920] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:46:35.002] (BSuccessful
I0827 14:46:35.003] message:Error from server (NotFound): pods "abc" not found
I0827 14:46:35.003] has:pods "abc" not found
I0827 14:46:35.004] FAIL!
I0827 14:46:35.005] message:Error from server (NotFound): pods "abc" not found
I0827 14:46:35.005] has not:List
I0827 14:46:35.006] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0827 14:46:35.108] Successful
I0827 14:46:35.109] message:I0827 14:46:35.067380   62830 loader.go:375] Config loaded from file:  /tmp/tmp.K9mQLxsNtY/.kube/config
I0827 14:46:35.109] I0827 14:46:35.069197   62830 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0827 14:46:35.109] I0827 14:46:35.087540   62830 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
I0827 14:46:40.619] Successful
I0827 14:46:40.620] message:NAME    DATA   AGE
I0827 14:46:40.620] one     0      0s
I0827 14:46:40.620] three   0      0s
I0827 14:46:40.620] two     0      0s
I0827 14:46:40.621] STATUS    REASON          MESSAGE
I0827 14:46:40.621] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0827 14:46:40.621] has not:watch is only supported on individual resources
I0827 14:46:41.707] Successful
I0827 14:46:41.708] message:STATUS    REASON          MESSAGE
I0827 14:46:41.708] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0827 14:46:41.708] has not:watch is only supported on individual resources
I0827 14:46:41.712] +++ [0827 14:46:41] Creating namespace namespace-1566917201-12832
I0827 14:46:41.785] namespace/namespace-1566917201-12832 created
I0827 14:46:41.852] Context "test" modified.
I0827 14:46:41.941] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:46:42.095] (Bpod/valid-pod created
... skipping 104 lines ...
I0827 14:46:42.197] }
I0827 14:46:42.280] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0827 14:46:42.535] (B<no value>Successful
I0827 14:46:42.535] message:valid-pod:
I0827 14:46:42.536] has:valid-pod:
I0827 14:46:42.624] Successful
I0827 14:46:42.628] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0827 14:46:42.628] 	template was:
I0827 14:46:42.628] 		{.missing}
I0827 14:46:42.628] 	object given to jsonpath engine was:
I0827 14:46:42.630] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-08-27T14:46:42Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-08-27T14:46:42Z"}}, "name":"valid-pod", "namespace":"namespace-1566917201-12832", "resourceVersion":"699", "selfLink":"/api/v1/namespaces/namespace-1566917201-12832/pods/valid-pod", "uid":"47977246-b410-40f0-bb5e-a80adfb0cd6d"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", 
"name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0827 14:46:42.630] has:missing is not found
I0827 14:46:42.707] Successful
I0827 14:46:42.707] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0827 14:46:42.707] 	template was:
I0827 14:46:42.707] 		{{.missing}}
I0827 14:46:42.707] 	raw data was:
I0827 14:46:42.708] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-08-27T14:46:42Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-08-27T14:46:42Z"}],"name":"valid-pod","namespace":"namespace-1566917201-12832","resourceVersion":"699","selfLink":"/api/v1/namespaces/namespace-1566917201-12832/pods/valid-pod","uid":"47977246-b410-40f0-bb5e-a80adfb0cd6d"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0827 14:46:42.708] 	object given to template engine was:
I0827 14:46:42.710] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-08-27T14:46:42Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-08-27T14:46:42Z]] name:valid-pod namespace:namespace-1566917201-12832 resourceVersion:699 selfLink:/api/v1/namespaces/namespace-1566917201-12832/pods/valid-pod uid:47977246-b410-40f0-bb5e-a80adfb0cd6d] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0827 14:46:42.710] has:map has no entry for key "missing"
W0827 14:46:42.810] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0827 14:46:43.798] Successful
I0827 14:46:43.799] message:NAME        READY   STATUS    RESTARTS   AGE
I0827 14:46:43.799] valid-pod   0/1     Pending   0          0s
I0827 14:46:43.799] STATUS      REASON          MESSAGE
I0827 14:46:43.799] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0827 14:46:43.800] has:STATUS
I0827 14:46:43.801] Successful
I0827 14:46:43.801] message:NAME        READY   STATUS    RESTARTS   AGE
I0827 14:46:43.801] valid-pod   0/1     Pending   0          0s
I0827 14:46:43.801] STATUS      REASON          MESSAGE
I0827 14:46:43.802] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0827 14:46:43.802] has:valid-pod
I0827 14:46:44.881] Successful
I0827 14:46:44.881] message:pod/valid-pod
I0827 14:46:44.881] has not:STATUS
I0827 14:46:44.883] Successful
I0827 14:46:44.884] message:pod/valid-pod
... skipping 144 lines ...
I0827 14:46:45.997] status:
I0827 14:46:45.997]   phase: Pending
I0827 14:46:45.997]   qosClass: Guaranteed
I0827 14:46:45.997] ---
I0827 14:46:45.998] has:name: valid-pod
I0827 14:46:46.053] Successful
I0827 14:46:46.054] message:Error from server (NotFound): pods "invalid-pod" not found
I0827 14:46:46.054] has:"invalid-pod" not found
I0827 14:46:46.132] pod "valid-pod" deleted
I0827 14:46:46.224] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:46:46.381] (Bpod/redis-master created
I0827 14:46:46.386] pod/valid-pod created
I0827 14:46:46.475] Successful
... skipping 35 lines ...
I0827 14:46:47.574] +++ command: run_kubectl_exec_pod_tests
I0827 14:46:47.584] +++ [0827 14:46:47] Creating namespace namespace-1566917207-17124
I0827 14:46:47.655] namespace/namespace-1566917207-17124 created
I0827 14:46:47.725] Context "test" modified.
I0827 14:46:47.731] +++ [0827 14:46:47] Testing kubectl exec POD COMMAND
I0827 14:46:47.810] Successful
I0827 14:46:47.811] message:Error from server (NotFound): pods "abc" not found
I0827 14:46:47.811] has:pods "abc" not found
I0827 14:46:47.960] pod/test-pod created
I0827 14:46:48.057] Successful
I0827 14:46:48.057] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0827 14:46:48.058] has not:pods "test-pod" not found
I0827 14:46:48.059] Successful
I0827 14:46:48.059] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0827 14:46:48.059] has not:pod or type/name must be specified
I0827 14:46:48.138] pod "test-pod" deleted
I0827 14:46:48.156] +++ exit code: 0
I0827 14:46:48.188] Recording: run_kubectl_exec_resource_name_tests
I0827 14:46:48.189] Running command: run_kubectl_exec_resource_name_tests
I0827 14:46:48.210] 
... skipping 2 lines ...
I0827 14:46:48.219] +++ command: run_kubectl_exec_resource_name_tests
I0827 14:46:48.229] +++ [0827 14:46:48] Creating namespace namespace-1566917208-6890
I0827 14:46:48.302] namespace/namespace-1566917208-6890 created
I0827 14:46:48.374] Context "test" modified.
I0827 14:46:48.381] +++ [0827 14:46:48] Testing kubectl exec TYPE/NAME COMMAND
I0827 14:46:48.492] Successful
I0827 14:46:48.493] message:error: the server doesn't have a resource type "foo"
I0827 14:46:48.493] has:error:
I0827 14:46:48.578] Successful
I0827 14:46:48.578] message:Error from server (NotFound): deployments.apps "bar" not found
I0827 14:46:48.579] has:"bar" not found
I0827 14:46:48.727] pod/test-pod created
I0827 14:46:48.908] replicaset.apps/frontend created
W0827 14:46:49.009] I0827 14:46:48.912417   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917208-6890", Name:"frontend", UID:"a9e123e7-c759-404d-b561-f4fe4d370ef4", APIVersion:"apps/v1", ResourceVersion:"752", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tzhd5
W0827 14:46:49.009] I0827 14:46:48.917256   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917208-6890", Name:"frontend", UID:"a9e123e7-c759-404d-b561-f4fe4d370ef4", APIVersion:"apps/v1", ResourceVersion:"752", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mrz8g
W0827 14:46:49.010] I0827 14:46:48.917801   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917208-6890", Name:"frontend", UID:"a9e123e7-c759-404d-b561-f4fe4d370ef4", APIVersion:"apps/v1", ResourceVersion:"752", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gndhl
I0827 14:46:49.110] configmap/test-set-env-config created
I0827 14:46:49.169] Successful
I0827 14:46:49.169] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0827 14:46:49.169] has:not implemented
I0827 14:46:49.265] Successful
I0827 14:46:49.266] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0827 14:46:49.266] has not:not found
I0827 14:46:49.268] Successful
I0827 14:46:49.268] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0827 14:46:49.268] has not:pod or type/name must be specified
I0827 14:46:49.368] Successful
I0827 14:46:49.369] message:Error from server (BadRequest): pod frontend-gndhl does not have a host assigned
I0827 14:46:49.369] has not:not found
I0827 14:46:49.371] Successful
I0827 14:46:49.371] message:Error from server (BadRequest): pod frontend-gndhl does not have a host assigned
I0827 14:46:49.371] has not:pod or type/name must be specified
I0827 14:46:49.449] pod "test-pod" deleted
I0827 14:46:49.529] replicaset.apps "frontend" deleted
I0827 14:46:49.637] configmap "test-set-env-config" deleted
I0827 14:46:49.658] +++ exit code: 0
I0827 14:46:49.689] Recording: run_create_secret_tests
I0827 14:46:49.690] Running command: run_create_secret_tests
I0827 14:46:49.716] 
I0827 14:46:49.719] +++ Running case: test-cmd.run_create_secret_tests 
I0827 14:46:49.722] +++ working dir: /go/src/k8s.io/kubernetes
I0827 14:46:49.724] +++ command: run_create_secret_tests
I0827 14:46:49.819] Successful
I0827 14:46:49.820] message:Error from server (NotFound): secrets "mysecret" not found
I0827 14:46:49.820] has:secrets "mysecret" not found
I0827 14:46:49.982] Successful
I0827 14:46:49.982] message:Error from server (NotFound): secrets "mysecret" not found
I0827 14:46:49.982] has:secrets "mysecret" not found
I0827 14:46:49.985] Successful
I0827 14:46:49.985] message:user-specified
I0827 14:46:49.985] has:user-specified
I0827 14:46:50.081] Successful
I0827 14:46:50.159] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"fd083e3e-2874-4f25-aae9-1c3a574db25a","resourceVersion":"774","creationTimestamp":"2019-08-27T14:46:50Z"}}
... skipping 2 lines ...
I0827 14:46:50.330] has:uid
I0827 14:46:50.415] Successful
I0827 14:46:50.416] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"fd083e3e-2874-4f25-aae9-1c3a574db25a","resourceVersion":"775","creationTimestamp":"2019-08-27T14:46:50Z","managedFields":[{"manager":"kubectl","operation":"Update","apiVersion":"v1","time":"2019-08-27T14:46:50Z","fields":{"f:data":{"f:key1":{},".":{}}}}]},"data":{"key1":"config1"}}
I0827 14:46:50.416] has:config1
I0827 14:46:50.487] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"fd083e3e-2874-4f25-aae9-1c3a574db25a"}}
I0827 14:46:50.574] Successful
I0827 14:46:50.575] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0827 14:46:50.575] has:configmaps "tester-update-cm" not found
I0827 14:46:50.587] +++ exit code: 0
I0827 14:46:50.618] Recording: run_kubectl_create_kustomization_directory_tests
I0827 14:46:50.618] Running command: run_kubectl_create_kustomization_directory_tests
I0827 14:46:50.639] 
I0827 14:46:50.642] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 158 lines ...
I0827 14:46:53.360] valid-pod   0/1     Pending   0          0s
I0827 14:46:53.360] has:valid-pod
I0827 14:46:54.445] Successful
I0827 14:46:54.446] message:NAME        READY   STATUS    RESTARTS   AGE
I0827 14:46:54.446] valid-pod   0/1     Pending   0          0s
I0827 14:46:54.446] STATUS      REASON          MESSAGE
I0827 14:46:54.446] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0827 14:46:54.446] has:Timeout exceeded while reading body
I0827 14:46:54.549] Successful
I0827 14:46:54.550] message:NAME        READY   STATUS    RESTARTS   AGE
I0827 14:46:54.550] valid-pod   0/1     Pending   0          1s
I0827 14:46:54.550] has:valid-pod
I0827 14:46:54.621] Successful
I0827 14:46:54.621] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0827 14:46:54.621] has:Invalid timeout value
I0827 14:46:54.711] pod "valid-pod" deleted
I0827 14:46:54.733] +++ exit code: 0
I0827 14:46:54.769] Recording: run_crd_tests
I0827 14:46:54.769] Running command: run_crd_tests
I0827 14:46:54.795] 
... skipping 236 lines ...
I0827 14:46:59.637] foo.company.com/test patched
I0827 14:46:59.742] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0827 14:46:59.830] (Bfoo.company.com/test patched
I0827 14:46:59.938] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0827 14:47:00.039] (Bfoo.company.com/test patched
I0827 14:47:00.147] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0827 14:47:00.350] (B+++ [0827 14:47:00] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0827 14:47:00.434] {
I0827 14:47:00.434]     "apiVersion": "company.com/v1",
I0827 14:47:00.435]     "kind": "Foo",
I0827 14:47:00.435]     "metadata": {
I0827 14:47:00.435]         "annotations": {
I0827 14:47:00.435]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 293 lines ...
I0827 14:47:20.435] (Bnamespace/non-native-resources created
I0827 14:47:20.607] bar.company.com/test created
I0827 14:47:20.719] crd.sh:455: Successful get bars {{len .items}}: 1
I0827 14:47:20.809] (Bnamespace "non-native-resources" deleted
I0827 14:47:26.049] crd.sh:458: Successful get bars {{len .items}}: 0
I0827 14:47:26.234] (Bcustomresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0827 14:47:26.335] Error from server (NotFound): namespaces "non-native-resources" not found
I0827 14:47:26.435] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0827 14:47:26.451] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0827 14:47:26.570] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0827 14:47:26.607] +++ exit code: 0
I0827 14:47:26.649] Recording: run_cmd_with_img_tests
I0827 14:47:26.649] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0827 14:47:26.993] I0827 14:47:26.992047   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917246-82", Name:"test1-9797f89d8", UID:"1553884f-580d-4cce-aa9a-ccdf38237052", APIVersion:"apps/v1", ResourceVersion:"928", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-9797f89d8-tqb97
I0827 14:47:27.094] Successful
I0827 14:47:27.094] message:deployment.apps/test1 created
I0827 14:47:27.095] has:deployment.apps/test1 created
I0827 14:47:27.095] deployment.apps "test1" deleted
I0827 14:47:27.181] Successful
I0827 14:47:27.182] message:error: Invalid image name "InvalidImageName": invalid reference format
I0827 14:47:27.182] has:error: Invalid image name "InvalidImageName": invalid reference format
I0827 14:47:27.201] +++ exit code: 0
I0827 14:47:27.255] +++ [0827 14:47:27] Testing recursive resources
I0827 14:47:27.263] +++ [0827 14:47:27] Creating namespace namespace-1566917247-11670
W0827 14:47:27.366] W0827 14:47:27.249340   49414 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0827 14:47:27.367] E0827 14:47:27.251165   52919 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:27.369] W0827 14:47:27.356182   49414 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0827 14:47:27.369] E0827 14:47:27.361217   52919 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:47:27.470] namespace/namespace-1566917247-11670 created
I0827 14:47:27.470] Context "test" modified.
I0827 14:47:27.564] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:47:27.861] (Bgeneric-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0827 14:47:27.865] (BSuccessful
I0827 14:47:27.866] message:pod/busybox0 created
I0827 14:47:27.866] pod/busybox1 created
I0827 14:47:27.866] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0827 14:47:27.867] has:error validating data: kind not set
I0827 14:47:27.966] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0827 14:47:28.168] (Bgeneric-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0827 14:47:28.171] (BSuccessful
I0827 14:47:28.172] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0827 14:47:28.172] has:Object 'Kind' is missing
I0827 14:47:28.279] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0827 14:47:28.635] (Bgeneric-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0827 14:47:28.638] (BSuccessful
I0827 14:47:28.639] message:pod/busybox0 replaced
I0827 14:47:28.639] pod/busybox1 replaced
I0827 14:47:28.639] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0827 14:47:28.640] has:error validating data: kind not set
I0827 14:47:28.736] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0827 14:47:28.835] (BSuccessful
I0827 14:47:28.836] message:Name:         busybox0
I0827 14:47:28.836] Namespace:    namespace-1566917247-11670
I0827 14:47:28.836] Priority:     0
I0827 14:47:28.837] Node:         <none>
... skipping 159 lines ...
I0827 14:47:28.859] has:Object 'Kind' is missing
I0827 14:47:28.935] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0827 14:47:29.132] (Bgeneric-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0827 14:47:29.135] (BSuccessful
I0827 14:47:29.136] message:pod/busybox0 annotated
I0827 14:47:29.136] pod/busybox1 annotated
I0827 14:47:29.136] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0827 14:47:29.137] has:Object 'Kind' is missing
I0827 14:47:29.242] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0827 14:47:29.546] (Bgeneric-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0827 14:47:29.550] (BSuccessful
I0827 14:47:29.551] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0827 14:47:29.551] pod/busybox0 configured
I0827 14:47:29.551] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0827 14:47:29.551] pod/busybox1 configured
I0827 14:47:29.551] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0827 14:47:29.551] has:error validating data: kind not set
I0827 14:47:29.644] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:47:29.817] (Bdeployment.apps/nginx created
W0827 14:47:29.918] W0827 14:47:27.473214   49414 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0827 14:47:29.919] E0827 14:47:27.474706   52919 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:29.919] W0827 14:47:27.582164   49414 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0827 14:47:29.919] E0827 14:47:27.583768   52919 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:29.920] E0827 14:47:28.252702   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:29.920] E0827 14:47:28.363359   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:29.920] E0827 14:47:28.477438   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:29.921] E0827 14:47:28.585142   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:29.921] E0827 14:47:29.254388   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:29.921] E0827 14:47:29.364967   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:29.921] E0827 14:47:29.478881   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:29.922] E0827 14:47:29.586851   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:29.922] I0827 14:47:29.821856   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917247-11670", Name:"nginx", UID:"e72e8697-2f4a-4fa6-9c91-49f3f26c0b44", APIVersion:"apps/v1", ResourceVersion:"953", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-bbbbb95b5 to 3
W0827 14:47:29.923] I0827 14:47:29.825522   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917247-11670", Name:"nginx-bbbbb95b5", UID:"fe3a5013-e039-429c-a146-c96d5d944b60", APIVersion:"apps/v1", ResourceVersion:"954", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-kk76w
W0827 14:47:29.923] I0827 14:47:29.832217   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917247-11670", Name:"nginx-bbbbb95b5", UID:"fe3a5013-e039-429c-a146-c96d5d944b60", APIVersion:"apps/v1", ResourceVersion:"954", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-jqcd8
W0827 14:47:29.924] I0827 14:47:29.832446   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917247-11670", Name:"nginx-bbbbb95b5", UID:"fe3a5013-e039-429c-a146-c96d5d944b60", APIVersion:"apps/v1", ResourceVersion:"954", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-rw7b5
I0827 14:47:30.024] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0827 14:47:30.027] (Bgeneric-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 41 lines ...
I0827 14:47:30.218]       terminationGracePeriodSeconds: 30
I0827 14:47:30.218] status: {}
I0827 14:47:30.218] has:extensions/v1beta1
I0827 14:47:30.298] deployment.apps "nginx" deleted
W0827 14:47:30.398] kubectl convert is DEPRECATED and will be removed in a future version.
W0827 14:47:30.399] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0827 14:47:30.399] E0827 14:47:30.255885   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:30.400] E0827 14:47:30.366659   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:30.481] E0827 14:47:30.480600   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:47:30.581] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0827 14:47:30.593] (Bgeneric-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0827 14:47:30.596] (BSuccessful
I0827 14:47:30.597] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0827 14:47:30.597] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0827 14:47:30.598] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0827 14:47:30.598] has:Object 'Kind' is missing
I0827 14:47:30.694] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0827 14:47:30.788] (BSuccessful
I0827 14:47:30.789] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0827 14:47:30.789] has:busybox0:busybox1:
I0827 14:47:30.792] Successful
I0827 14:47:30.792] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0827 14:47:30.792] has:Object 'Kind' is missing
W0827 14:47:30.893] E0827 14:47:30.588214   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:47:30.994] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0827 14:47:31.019] (Bpod/busybox0 labeled
I0827 14:47:31.020] pod/busybox1 labeled
I0827 14:47:31.020] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0827 14:47:31.122] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0827 14:47:31.125] (BSuccessful
I0827 14:47:31.125] message:pod/busybox0 labeled
I0827 14:47:31.125] pod/busybox1 labeled
I0827 14:47:31.126] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0827 14:47:31.126] has:Object 'Kind' is missing
I0827 14:47:31.226] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0827 14:47:31.330] (Bpod/busybox0 patched
I0827 14:47:31.330] pod/busybox1 patched
I0827 14:47:31.331] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
W0827 14:47:31.432] E0827 14:47:31.257942   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:31.433] E0827 14:47:31.368579   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:31.482] E0827 14:47:31.482098   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:47:31.583] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0827 14:47:31.584] (BSuccessful
I0827 14:47:31.584] message:pod/busybox0 patched
I0827 14:47:31.584] pod/busybox1 patched
I0827 14:47:31.585] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0827 14:47:31.585] has:Object 'Kind' is missing
I0827 14:47:31.585] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0827 14:47:31.749] (Bgeneric-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:47:31.752] (BSuccessful
I0827 14:47:31.752] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0827 14:47:31.752] pod "busybox0" force deleted
I0827 14:47:31.752] pod "busybox1" force deleted
I0827 14:47:31.753] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0827 14:47:31.753] has:Object 'Kind' is missing
I0827 14:47:31.851] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:47:32.014] (Breplicationcontroller/busybox0 created
I0827 14:47:32.023] replicationcontroller/busybox1 created
W0827 14:47:32.124] E0827 14:47:31.589948   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:32.124] I0827 14:47:32.018349   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917247-11670", Name:"busybox0", UID:"0bfa88f4-8f7f-4ae0-9f45-b7a4c799ab54", APIVersion:"v1", ResourceVersion:"984", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-zzz5j
W0827 14:47:32.124] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0827 14:47:32.125] I0827 14:47:32.027095   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917247-11670", Name:"busybox1", UID:"f102e9f3-2483-4853-b8f0-73ee64651a90", APIVersion:"v1", ResourceVersion:"988", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-497bd
I0827 14:47:32.225] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0827 14:47:32.238] (Bgeneric-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0827 14:47:32.335] (Bgeneric-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0827 14:47:32.434] (Bgeneric-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0827 14:47:32.660] (Bgeneric-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0827 14:47:32.765] (Bgeneric-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0827 14:47:32.768] (BSuccessful
I0827 14:47:32.769] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0827 14:47:32.769] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0827 14:47:32.769] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0827 14:47:32.770] has:Object 'Kind' is missing
I0827 14:47:32.854] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0827 14:47:32.951] horizontalpodautoscaler.autoscaling "busybox1" deleted
W0827 14:47:33.051] E0827 14:47:32.259324   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:33.052] E0827 14:47:32.369740   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:33.052] E0827 14:47:32.483525   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:33.053] E0827 14:47:32.591557   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:47:33.153] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0827 14:47:33.172] (Bgeneric-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0827 14:47:33.274] (Bgeneric-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0827 14:47:33.489] (Bgeneric-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0827 14:47:33.595] (Bgeneric-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0827 14:47:33.598] (BSuccessful
I0827 14:47:33.598] message:service/busybox0 exposed
I0827 14:47:33.599] service/busybox1 exposed
I0827 14:47:33.599] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0827 14:47:33.599] has:Object 'Kind' is missing
W0827 14:47:33.700] E0827 14:47:33.262683   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:33.700] E0827 14:47:33.371377   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:33.701] E0827 14:47:33.485636   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:33.701] E0827 14:47:33.593245   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:47:33.802] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0827 14:47:33.818] (Bgeneric-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0827 14:47:33.921] (Bgeneric-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0827 14:47:34.143] (Bgeneric-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0827 14:47:34.241] (Bgeneric-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0827 14:47:34.243] (BSuccessful
I0827 14:47:34.243] message:replicationcontroller/busybox0 scaled
I0827 14:47:34.244] replicationcontroller/busybox1 scaled
I0827 14:47:34.245] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0827 14:47:34.245] has:Object 'Kind' is missing
I0827 14:47:34.346] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0827 14:47:34.543] (Bgeneric-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:47:34.546] (BSuccessful
I0827 14:47:34.546] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0827 14:47:34.546] replicationcontroller "busybox0" force deleted
I0827 14:47:34.547] replicationcontroller "busybox1" force deleted
I0827 14:47:34.547] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0827 14:47:34.547] has:Object 'Kind' is missing
I0827 14:47:34.651] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:47:34.839] (Bdeployment.apps/nginx1-deployment created
I0827 14:47:34.846] deployment.apps/nginx0-deployment created
W0827 14:47:34.947] I0827 14:47:34.024639   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917247-11670", Name:"busybox0", UID:"0bfa88f4-8f7f-4ae0-9f45-b7a4c799ab54", APIVersion:"v1", ResourceVersion:"1005", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-j9dt5
W0827 14:47:34.948] I0827 14:47:34.042311   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917247-11670", Name:"busybox1", UID:"f102e9f3-2483-4853-b8f0-73ee64651a90", APIVersion:"v1", ResourceVersion:"1009", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-pdt6g
W0827 14:47:34.948] E0827 14:47:34.264571   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:34.949] E0827 14:47:34.373005   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:34.949] E0827 14:47:34.487266   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:34.949] E0827 14:47:34.595061   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:34.949] I0827 14:47:34.840449   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917247-11670", Name:"nginx1-deployment", UID:"54bfa926-0a94-4e6d-b96a-dc7f13757e24", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-84f7f49fb7 to 2
W0827 14:47:34.950] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0827 14:47:34.950] I0827 14:47:34.848736   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917247-11670", Name:"nginx1-deployment-84f7f49fb7", UID:"5f5c6c2f-836c-424f-9b44-f0ea789ea037", APIVersion:"apps/v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-xv94b
W0827 14:47:34.951] I0827 14:47:34.850002   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917247-11670", Name:"nginx0-deployment", UID:"3d591f26-dacd-482b-a86d-8f54617ccd89", APIVersion:"apps/v1", ResourceVersion:"1028", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57475bf54d to 2
W0827 14:47:34.951] I0827 14:47:34.852630   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917247-11670", Name:"nginx1-deployment-84f7f49fb7", UID:"5f5c6c2f-836c-424f-9b44-f0ea789ea037", APIVersion:"apps/v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-2kbrc
W0827 14:47:34.951] I0827 14:47:34.854548   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917247-11670", Name:"nginx0-deployment-57475bf54d", UID:"3707e971-ad5f-4261-b95b-f36464acf362", APIVersion:"apps/v1", ResourceVersion:"1031", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-gw9zz
W0827 14:47:34.952] I0827 14:47:34.859090   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917247-11670", Name:"nginx0-deployment-57475bf54d", UID:"3707e971-ad5f-4261-b95b-f36464acf362", APIVersion:"apps/v1", ResourceVersion:"1031", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-dkfpk
I0827 14:47:35.052] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0827 14:47:35.059] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0827 14:47:35.272] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0827 14:47:35.274] Successful
I0827 14:47:35.275] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0827 14:47:35.275] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0827 14:47:35.275] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0827 14:47:35.275] has:Object 'Kind' is missing
I0827 14:47:35.366] deployment.apps/nginx1-deployment paused
I0827 14:47:35.371] deployment.apps/nginx0-deployment paused
W0827 14:47:35.472] E0827 14:47:35.266326   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:35.473] E0827 14:47:35.374049   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:35.489] E0827 14:47:35.488649   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:47:35.590] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0827 14:47:35.590] Successful
I0827 14:47:35.591] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0827 14:47:35.591] has:Object 'Kind' is missing
I0827 14:47:35.592] deployment.apps/nginx1-deployment resumed
I0827 14:47:35.593] deployment.apps/nginx0-deployment resumed
W0827 14:47:35.693] E0827 14:47:35.596243   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:47:35.794] generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I0827 14:47:35.794] Successful
I0827 14:47:35.795] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0827 14:47:35.795] has:Object 'Kind' is missing
I0827 14:47:35.815] Successful
I0827 14:47:35.815] message:deployment.apps/nginx1-deployment 
I0827 14:47:35.816] REVISION  CHANGE-CAUSE
I0827 14:47:35.816] 1         <none>
I0827 14:47:35.816] 
I0827 14:47:35.816] deployment.apps/nginx0-deployment 
I0827 14:47:35.816] REVISION  CHANGE-CAUSE
I0827 14:47:35.816] 1         <none>
I0827 14:47:35.816] 
I0827 14:47:35.817] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0827 14:47:35.817] has:nginx0-deployment
I0827 14:47:35.818] Successful
I0827 14:47:35.818] message:deployment.apps/nginx1-deployment 
I0827 14:47:35.818] REVISION  CHANGE-CAUSE
I0827 14:47:35.818] 1         <none>
I0827 14:47:35.818] 
I0827 14:47:35.818] deployment.apps/nginx0-deployment 
I0827 14:47:35.819] REVISION  CHANGE-CAUSE
I0827 14:47:35.819] 1         <none>
I0827 14:47:35.819] 
I0827 14:47:35.819] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0827 14:47:35.819] has:nginx1-deployment
I0827 14:47:35.821] Successful
I0827 14:47:35.821] message:deployment.apps/nginx1-deployment 
I0827 14:47:35.821] REVISION  CHANGE-CAUSE
I0827 14:47:35.821] 1         <none>
I0827 14:47:35.821] 
I0827 14:47:35.821] deployment.apps/nginx0-deployment 
I0827 14:47:35.822] REVISION  CHANGE-CAUSE
I0827 14:47:35.822] 1         <none>
I0827 14:47:35.822] 
I0827 14:47:35.822] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0827 14:47:35.822] has:Object 'Kind' is missing
I0827 14:47:35.906] deployment.apps "nginx1-deployment" force deleted
I0827 14:47:35.911] deployment.apps "nginx0-deployment" force deleted
W0827 14:47:36.012] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0827 14:47:36.013] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0827 14:47:36.268] E0827 14:47:36.267874   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:36.376] E0827 14:47:36.375785   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:36.492] E0827 14:47:36.491236   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:36.599] E0827 14:47:36.598310   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:47:37.015] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:47:37.178] replicationcontroller/busybox0 created
I0827 14:47:37.183] replicationcontroller/busybox1 created
W0827 14:47:37.283] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0827 14:47:37.284] I0827 14:47:37.182615   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917247-11670", Name:"busybox0", UID:"8c614e15-76fb-420a-beea-cea4ccda3c9a", APIVersion:"v1", ResourceVersion:"1075", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-fcnmf
W0827 14:47:37.284] I0827 14:47:37.188926   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917247-11670", Name:"busybox1", UID:"0569d37e-25f2-4ee6-a146-1d6579ae8c7c", APIVersion:"v1", ResourceVersion:"1077", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-vhjwg
W0827 14:47:37.285] E0827 14:47:37.269119   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:37.378] E0827 14:47:37.377249   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:47:37.478] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0827 14:47:37.479] Successful
I0827 14:47:37.479] message:no rollbacker has been implemented for "ReplicationController"
I0827 14:47:37.479] no rollbacker has been implemented for "ReplicationController"
I0827 14:47:37.479] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0827 14:47:37.480] has:no rollbacker has been implemented for "ReplicationController"
I0827 14:47:37.480] Successful
I0827 14:47:37.480] message:no rollbacker has been implemented for "ReplicationController"
I0827 14:47:37.480] no rollbacker has been implemented for "ReplicationController"
I0827 14:47:37.480] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0827 14:47:37.481] has:Object 'Kind' is missing
I0827 14:47:37.509] Successful
I0827 14:47:37.510] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0827 14:47:37.510] error: replicationcontrollers "busybox0" pausing is not supported
I0827 14:47:37.511] error: replicationcontrollers "busybox1" pausing is not supported
I0827 14:47:37.511] has:Object 'Kind' is missing
I0827 14:47:37.511] Successful
I0827 14:47:37.512] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0827 14:47:37.512] error: replicationcontrollers "busybox0" pausing is not supported
I0827 14:47:37.512] error: replicationcontrollers "busybox1" pausing is not supported
I0827 14:47:37.513] has:replicationcontrollers "busybox0" pausing is not supported
I0827 14:47:37.515] Successful
I0827 14:47:37.516] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0827 14:47:37.516] error: replicationcontrollers "busybox0" pausing is not supported
I0827 14:47:37.517] error: replicationcontrollers "busybox1" pausing is not supported
I0827 14:47:37.517] has:replicationcontrollers "busybox1" pausing is not supported
W0827 14:47:37.617] E0827 14:47:37.492953   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:37.618] E0827 14:47:37.600267   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:37.701] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0827 14:47:37.730] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0827 14:47:37.831] Successful
I0827 14:47:37.832] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0827 14:47:37.832] error: replicationcontrollers "busybox0" resuming is not supported
I0827 14:47:37.832] error: replicationcontrollers "busybox1" resuming is not supported
I0827 14:47:37.833] has:Object 'Kind' is missing
I0827 14:47:37.833] Successful
I0827 14:47:37.833] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0827 14:47:37.833] error: replicationcontrollers "busybox0" resuming is not supported
I0827 14:47:37.834] error: replicationcontrollers "busybox1" resuming is not supported
I0827 14:47:37.834] has:replicationcontrollers "busybox0" resuming is not supported
I0827 14:47:37.834] Successful
I0827 14:47:37.834] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0827 14:47:37.835] error: replicationcontrollers "busybox0" resuming is not supported
I0827 14:47:37.835] error: replicationcontrollers "busybox1" resuming is not supported
I0827 14:47:37.835] has:replicationcontrollers "busybox0" resuming is not supported
I0827 14:47:37.835] replicationcontroller "busybox0" force deleted
I0827 14:47:37.835] replicationcontroller "busybox1" force deleted
W0827 14:47:38.271] E0827 14:47:38.270691   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:38.379] E0827 14:47:38.378859   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:38.495] E0827 14:47:38.494687   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:38.602] E0827 14:47:38.601740   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:47:38.761] Recording: run_namespace_tests
I0827 14:47:38.762] Running command: run_namespace_tests
I0827 14:47:38.789] 
I0827 14:47:38.793] +++ Running case: test-cmd.run_namespace_tests 
I0827 14:47:38.797] +++ working dir: /go/src/k8s.io/kubernetes
I0827 14:47:38.803] +++ command: run_namespace_tests
I0827 14:47:38.815] +++ [0827 14:47:38] Testing kubectl(v1:namespaces)
I0827 14:47:38.893] namespace/my-namespace created
I0827 14:47:38.988] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0827 14:47:39.070] namespace "my-namespace" deleted
W0827 14:47:39.273] E0827 14:47:39.272549   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:39.380] E0827 14:47:39.380257   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:39.497] E0827 14:47:39.496321   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:39.604] E0827 14:47:39.603446   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:40.275] E0827 14:47:40.274363   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:40.382] E0827 14:47:40.381700   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:40.499] E0827 14:47:40.498314   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:40.605] E0827 14:47:40.605067   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:41.276] E0827 14:47:41.275882   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:41.383] E0827 14:47:41.382938   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:41.500] E0827 14:47:41.499840   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:41.607] E0827 14:47:41.606631   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:42.278] E0827 14:47:42.277304   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:42.385] E0827 14:47:42.384523   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:42.502] E0827 14:47:42.501391   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:42.609] E0827 14:47:42.608780   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:43.279] E0827 14:47:43.278906   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:43.386] E0827 14:47:43.385949   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:43.503] E0827 14:47:43.503123   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:43.611] E0827 14:47:43.610988   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:43.898] I0827 14:47:43.897275   52919 shared_informer.go:197] Waiting for caches to sync for resource quota
W0827 14:47:43.999] I0827 14:47:43.997593   52919 shared_informer.go:204] Caches are synced for resource quota 
I0827 14:47:44.204] namespace/my-namespace condition met
W0827 14:47:44.307] E0827 14:47:44.280517   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:44.389] E0827 14:47:44.388841   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:44.415] I0827 14:47:44.414597   52919 shared_informer.go:197] Waiting for caches to sync for garbage collector
W0827 14:47:44.505] E0827 14:47:44.504344   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:44.515] I0827 14:47:44.514974   52919 shared_informer.go:204] Caches are synced for garbage collector 
W0827 14:47:44.612] E0827 14:47:44.612317   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:47:44.713] Successful
I0827 14:47:44.714] message:Error from server (NotFound): namespaces "my-namespace" not found
I0827 14:47:44.714] has: not found
I0827 14:47:44.714] namespace/my-namespace created
I0827 14:47:44.714] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0827 14:47:44.756] Successful
I0827 14:47:44.757] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0827 14:47:44.757] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0827 14:47:44.762] namespace "namespace-1566917211-4803" deleted
I0827 14:47:44.762] namespace "namespace-1566917212-30721" deleted
I0827 14:47:44.762] namespace "namespace-1566917214-22357" deleted
I0827 14:47:44.762] namespace "namespace-1566917216-13754" deleted
I0827 14:47:44.762] namespace "namespace-1566917246-82" deleted
I0827 14:47:44.762] namespace "namespace-1566917247-11670" deleted
I0827 14:47:44.762] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0827 14:47:44.763] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0827 14:47:44.763] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0827 14:47:44.763] has:warning: deleting cluster-scoped resources
I0827 14:47:44.763] Successful
I0827 14:47:44.763] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0827 14:47:44.763] namespace "kube-node-lease" deleted
I0827 14:47:44.763] namespace "my-namespace" deleted
I0827 14:47:44.763] namespace "namespace-1566917112-7426" deleted
... skipping 27 lines ...
I0827 14:47:44.766] namespace "namespace-1566917211-4803" deleted
I0827 14:47:44.766] namespace "namespace-1566917212-30721" deleted
I0827 14:47:44.766] namespace "namespace-1566917214-22357" deleted
I0827 14:47:44.766] namespace "namespace-1566917216-13754" deleted
I0827 14:47:44.767] namespace "namespace-1566917246-82" deleted
I0827 14:47:44.767] namespace "namespace-1566917247-11670" deleted
I0827 14:47:44.767] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0827 14:47:44.767] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0827 14:47:44.767] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0827 14:47:44.767] has:namespace "my-namespace" deleted
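The Forbidden responses above show the server protecting the built-in namespaces while letting every test namespace go. The log only shows the resulting errors; attributing them to an admission-style check on a fixed set of protected namespaces is the assumption in this sketch:

```python
# Protected set taken from the Forbidden errors above; the check
# itself is a hypothetical stand-in for the server-side guard.
PROTECTED = {"default", "kube-public", "kube-system"}

def delete_namespace(name):
    if name in PROTECTED:
        raise PermissionError(
            'namespaces "%s" is forbidden: '
            "this namespace may not be deleted" % name)
    return 'namespace "%s" deleted' % name

print(delete_namespace("my-namespace"))
# -> namespace "my-namespace" deleted
```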
I0827 14:47:44.869] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0827 14:47:44.946] namespace/other created
I0827 14:47:45.046] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0827 14:47:45.141] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:47:45.301] pod/valid-pod created
I0827 14:47:45.402] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0827 14:47:45.501] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0827 14:47:45.589] Successful
I0827 14:47:45.590] message:error: a resource cannot be retrieved by name across all namespaces
I0827 14:47:45.590] has:a resource cannot be retrieved by name across all namespaces
I0827 14:47:45.683] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0827 14:47:45.770] pod "valid-pod" force deleted
I0827 14:47:45.872] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:47:45.954] namespace "other" deleted
W0827 14:47:46.055] E0827 14:47:45.282108   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:46.056] E0827 14:47:45.390187   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:46.056] E0827 14:47:45.505963   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:46.056] E0827 14:47:45.614042   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:46.057] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0827 14:47:46.285] E0827 14:47:46.284045   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:46.392] E0827 14:47:46.391747   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:46.508] E0827 14:47:46.507582   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:46.616] E0827 14:47:46.615422   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:47.287] E0827 14:47:47.286110   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:47.394] E0827 14:47:47.393742   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:47.510] E0827 14:47:47.509505   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:47.541] I0827 14:47:47.540426   52919 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1566917247-11670
W0827 14:47:47.552] I0827 14:47:47.550719   52919 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1566917247-11670
W0827 14:47:47.619] E0827 14:47:47.617764   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:48.289] E0827 14:47:48.288224   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 11 lines ...
I0827 14:47:51.111] +++ exit code: 0
I0827 14:47:51.148] Recording: run_secrets_test
I0827 14:47:51.149] Running command: run_secrets_test
I0827 14:47:51.174] 
I0827 14:47:51.177] +++ Running case: test-cmd.run_secrets_test 
I0827 14:47:51.181] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 40 lines ...
I0827 14:47:51.464] has not:example.com
I0827 14:47:51.562] core.sh:725: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-secrets\" }}found{{end}}{{end}}:: :
I0827 14:47:51.643] namespace/test-secrets created
I0827 14:47:51.742] core.sh:729: Successful get namespaces/test-secrets {{.metadata.name}}: test-secrets
I0827 14:47:51.837] core.sh:733: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:47:51.918] secret/test-secret created
W0827 14:47:52.020] E0827 14:47:51.292205   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:52.020] E0827 14:47:51.401886   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:52.020] I0827 14:47:51.442994   68999 loader.go:375] Config loaded from file:  /tmp/tmp.K9mQLxsNtY/.kube/config
W0827 14:47:52.021] E0827 14:47:51.516238   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:52.021] E0827 14:47:51.624717   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:47:52.121] core.sh:737: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0827 14:47:52.122] core.sh:738: Successful get secret/test-secret --namespace=test-secrets {{.type}}: test-type
I0827 14:47:52.300] secret "test-secret" deleted
W0827 14:47:52.401] E0827 14:47:52.295467   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:52.404] E0827 14:47:52.403349   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:47:52.504] core.sh:748: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:47:52.505] secret/test-secret created
W0827 14:47:52.606] E0827 14:47:52.517848   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:52.627] E0827 14:47:52.626295   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:47:52.727] core.sh:752: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0827 14:47:52.728] core.sh:753: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
I0827 14:47:52.907] secret "test-secret" deleted
I0827 14:47:53.015] core.sh:763: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:47:53.110] secret/test-secret created
I0827 14:47:53.211] core.sh:766: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0827 14:47:53.303] core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0827 14:47:53.403] secret "test-secret" deleted
W0827 14:47:53.504] E0827 14:47:53.296827   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:53.505] E0827 14:47:53.405098   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:53.520] E0827 14:47:53.519355   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:47:53.620] secret/test-secret created
I0827 14:47:53.621] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0827 14:47:53.714] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0827 14:47:53.801] secret "test-secret" deleted
W0827 14:47:53.901] E0827 14:47:53.627797   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:47:54.002] secret/secret-string-data created
I0827 14:47:54.073] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0827 14:47:54.183] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0827 14:47:54.283] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0827 14:47:54.371] secret "secret-string-data" deleted
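The core.sh:796–798 checks above confirm that values supplied through a secret's `stringData` field are persisted base64-encoded under `.data` (`v1` becomes `djE=`, `v2` becomes `djI=`), while `stringData` itself is not stored (`<no value>`). The encoding can be verified locally with plain coreutils:

```shell
# stringData values end up base64-encoded in the secret's .data map;
# these match the map[k1:djE= k2:djI=] reported by core.sh:796.
printf 'v1' | base64   # djE=
printf 'v2' | base64   # djI=
```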
W0827 14:47:54.471] E0827 14:47:54.298662   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:54.472] E0827 14:47:54.406615   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:47:54.522] E0827 14:47:54.521395   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:47:54.622] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:47:54.649] secret "test-secret" deleted
I0827 14:47:54.737] namespace "test-secrets" deleted
W0827 14:47:54.838] E0827 14:47:54.629230   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 20 lines ...
I0827 14:47:59.862] +++ exit code: 0
I0827 14:47:59.902] Recording: run_configmap_tests
I0827 14:47:59.903] Running command: run_configmap_tests
I0827 14:47:59.932] 
I0827 14:47:59.935] +++ Running case: test-cmd.run_configmap_tests 
I0827 14:47:59.938] +++ working dir: /go/src/k8s.io/kubernetes
I0827 14:47:59.943] +++ command: run_configmap_tests
I0827 14:47:59.957] +++ [0827 14:47:59] Creating namespace namespace-1566917279-5698
I0827 14:48:00.039] namespace/namespace-1566917279-5698 created
I0827 14:48:00.136] Context "test" modified.
I0827 14:48:00.144] +++ [0827 14:48:00] Testing configmaps
W0827 14:48:00.315] E0827 14:48:00.310917   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:00.416] configmap/test-configmap created
I0827 14:48:00.491] core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
I0827 14:48:00.579] configmap "test-configmap" deleted
W0827 14:48:00.680] E0827 14:48:00.417214   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:00.681] E0827 14:48:00.532922   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:00.681] E0827 14:48:00.639533   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:00.782] core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
I0827 14:48:00.784] namespace/test-configmaps created
I0827 14:48:00.872] core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
I0827 14:48:00.978] core.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
I0827 14:48:01.085] core.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-binary-configmap\" }}found{{end}}{{end}}:: :
I0827 14:48:01.175] configmap/test-configmap created
I0827 14:48:01.261] configmap/test-binary-configmap created
I0827 14:48:01.360] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0827 14:48:01.456] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0827 14:48:01.714] configmap "test-configmap" deleted
I0827 14:48:01.797] configmap "test-binary-configmap" deleted
I0827 14:48:01.882] namespace "test-configmaps" deleted
W0827 14:48:01.983] E0827 14:48:01.312313   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 23 lines ...
I0827 14:48:07.031] +++ exit code: 0
I0827 14:48:07.077] Recording: run_client_config_tests
I0827 14:48:07.078] Running command: run_client_config_tests
I0827 14:48:07.105] 
I0827 14:48:07.109] +++ Running case: test-cmd.run_client_config_tests 
I0827 14:48:07.113] +++ working dir: /go/src/k8s.io/kubernetes
I0827 14:48:07.117] +++ command: run_client_config_tests
I0827 14:48:07.132] +++ [0827 14:48:07] Creating namespace namespace-1566917287-26244
I0827 14:48:07.217] namespace/namespace-1566917287-26244 created
I0827 14:48:07.304] Context "test" modified.
I0827 14:48:07.311] +++ [0827 14:48:07] Testing client config
I0827 14:48:07.398] Successful
I0827 14:48:07.398] message:error: stat missing: no such file or directory
I0827 14:48:07.399] has:missing: no such file or directory
I0827 14:48:07.471] Successful
I0827 14:48:07.471] message:error: stat missing: no such file or directory
I0827 14:48:07.472] has:missing: no such file or directory
I0827 14:48:07.542] Successful
I0827 14:48:07.542] message:error: stat missing: no such file or directory
I0827 14:48:07.542] has:missing: no such file or directory
I0827 14:48:07.628] Successful
I0827 14:48:07.628] message:Error in configuration: context was not found for specified context: missing-context
I0827 14:48:07.628] has:context was not found for specified context: missing-context
I0827 14:48:07.702] Successful
I0827 14:48:07.702] message:error: no server found for cluster "missing-cluster"
I0827 14:48:07.702] has:no server found for cluster "missing-cluster"
I0827 14:48:07.778] Successful
I0827 14:48:07.779] message:error: auth info "missing-user" does not exist
I0827 14:48:07.779] has:auth info "missing-user" does not exist
W0827 14:48:07.880] E0827 14:48:07.329390   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:07.880] E0827 14:48:07.429339   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:07.880] E0827 14:48:07.545273   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:07.881] E0827 14:48:07.651218   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:07.981] Successful
I0827 14:48:07.982] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0827 14:48:07.982] has:error loading config file
I0827 14:48:07.997] Successful
I0827 14:48:07.997] message:error: stat missing-config: no such file or directory
I0827 14:48:07.997] has:no such file or directory
I0827 14:48:08.013] +++ exit code: 0
I0827 14:48:08.052] Recording: run_service_accounts_tests
I0827 14:48:08.052] Running command: run_service_accounts_tests
I0827 14:48:08.080] 
I0827 14:48:08.085] +++ Running case: test-cmd.run_service_accounts_tests 
I0827 14:48:08.087] +++ working dir: /go/src/k8s.io/kubernetes
I0827 14:48:08.091] +++ command: run_service_accounts_tests
I0827 14:48:08.103] +++ [0827 14:48:08] Creating namespace namespace-1566917288-3303
I0827 14:48:08.180] namespace/namespace-1566917288-3303 created
I0827 14:48:08.260] Context "test" modified.
I0827 14:48:08.267] +++ [0827 14:48:08] Testing service accounts
W0827 14:48:08.368] E0827 14:48:08.330807   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:08.431] E0827 14:48:08.430794   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:08.532] core.sh:828: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-service-accounts\" }}found{{end}}{{end}}:: :
I0827 14:48:08.532] namespace/test-service-accounts created
I0827 14:48:08.564] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0827 14:48:08.648] serviceaccount/test-service-account created
I0827 14:48:08.752] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0827 14:48:08.832] serviceaccount "test-service-account" deleted
I0827 14:48:08.925] namespace "test-service-accounts" deleted
W0827 14:48:09.026] E0827 14:48:08.546849   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 20 lines ...
W0827 14:48:13.662] E0827 14:48:13.661469   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:14.069] +++ exit code: 0
I0827 14:48:14.113] Recording: run_job_tests
I0827 14:48:14.113] Running command: run_job_tests
I0827 14:48:14.142] 
I0827 14:48:14.145] +++ Running case: test-cmd.run_job_tests 
I0827 14:48:14.149] +++ working dir: /go/src/k8s.io/kubernetes
I0827 14:48:14.152] +++ command: run_job_tests
I0827 14:48:14.166] +++ [0827 14:48:14] Creating namespace namespace-1566917294-8936
I0827 14:48:14.259] namespace/namespace-1566917294-8936 created
I0827 14:48:14.351] Context "test" modified.
I0827 14:48:14.358] +++ [0827 14:48:14] Testing job
W0827 14:48:14.459] E0827 14:48:14.341128   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:14.460] E0827 14:48:14.441061   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:14.558] E0827 14:48:14.557828   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:14.659] batch.sh:30: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-jobs\" }}found{{end}}{{end}}:: :
I0827 14:48:14.659] namespace/test-jobs created
I0827 14:48:14.661] batch.sh:34: Successful get namespaces/test-jobs {{.metadata.name}}: test-jobs
I0827 14:48:14.750] cronjob.batch/pi created
W0827 14:48:14.851] E0827 14:48:14.663216   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:14.851] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
I0827 14:48:14.952] batch.sh:39: Successful get cronjob/pi --namespace=test-jobs {{.metadata.name}}: pi
I0827 14:48:14.952] NAME   SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
I0827 14:48:14.952] pi     59 23 31 2 *   False     0        <none>          0s
I0827 14:48:15.037] Name:                          pi
I0827 14:48:15.037] Namespace:                     test-jobs
I0827 14:48:15.038] Labels:                        run=pi
I0827 14:48:15.038] Annotations:                   <none>
I0827 14:48:15.038] Schedule:                      59 23 31 2 *
I0827 14:48:15.039] Concurrency Policy:            Allow
I0827 14:48:15.039] Suspend:                       False
I0827 14:48:15.039] Successful Job History Limit:  3
I0827 14:48:15.039] Failed Job History Limit:      1
I0827 14:48:15.040] Starting Deadline Seconds:     <unset>
I0827 14:48:15.040] Selector:                      <unset>
I0827 14:48:15.040] Parallelism:                   <unset>
I0827 14:48:15.040] Completions:                   <unset>
I0827 14:48:15.040] Pod Template:
I0827 14:48:15.040]   Labels:  run=pi
... skipping 22 lines ...
I0827 14:48:15.226] batch.sh:48: Successful get jobs {{range.items}}{{.metadata.name}}{{end}}: 
I0827 14:48:15.327] job.batch/test-job created
I0827 14:48:15.431] batch.sh:53: Successful get job/test-job --namespace=test-jobs {{.metadata.name}}: test-job
I0827 14:48:15.518] NAME       COMPLETIONS   DURATION   AGE
I0827 14:48:15.518] test-job   0/1           0s         0s
W0827 14:48:15.619] I0827 14:48:15.329647   52919 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"82392adb-b8e5-4379-8157-3fec13171024", APIVersion:"batch/v1", ResourceVersion:"1357", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-scx59
W0827 14:48:15.619] E0827 14:48:15.342805   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:15.619] E0827 14:48:15.442693   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:15.620] E0827 14:48:15.559667   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:15.665] E0827 14:48:15.664857   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:15.766] Name:           test-job
I0827 14:48:15.766] Namespace:      test-jobs
I0827 14:48:15.767] Selector:       controller-uid=82392adb-b8e5-4379-8157-3fec13171024
I0827 14:48:15.767] Labels:         controller-uid=82392adb-b8e5-4379-8157-3fec13171024
I0827 14:48:15.767]                 job-name=test-job
I0827 14:48:15.767]                 run=pi
I0827 14:48:15.768] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0827 14:48:15.768] Controlled By:  CronJob/pi
I0827 14:48:15.768] Parallelism:    1
I0827 14:48:15.768] Completions:    1
I0827 14:48:15.768] Start Time:     Tue, 27 Aug 2019 14:48:15 +0000
I0827 14:48:15.769] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0827 14:48:15.769] Pod Template:
I0827 14:48:15.769]   Labels:  controller-uid=82392adb-b8e5-4379-8157-3fec13171024
I0827 14:48:15.769]            job-name=test-job
I0827 14:48:15.769]            run=pi
I0827 14:48:15.770]   Containers:
I0827 14:48:15.770]    pi:
... skipping 15 lines ...
I0827 14:48:15.773]   Type    Reason            Age   From            Message
I0827 14:48:15.774]   ----    ------            ----  ----            -------
I0827 14:48:15.774]   Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-scx59
I0827 14:48:15.774] job.batch "test-job" deleted
I0827 14:48:15.815] cronjob.batch "pi" deleted
I0827 14:48:15.906] namespace "test-jobs" deleted
W0827 14:48:16.345] E0827 14:48:16.344994   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 19 lines ...
I0827 14:48:21.065] +++ exit code: 0
I0827 14:48:21.115] Recording: run_create_job_tests
I0827 14:48:21.116] Running command: run_create_job_tests
I0827 14:48:21.144] 
I0827 14:48:21.147] +++ Running case: test-cmd.run_create_job_tests 
I0827 14:48:21.150] +++ working dir: /go/src/k8s.io/kubernetes
I0827 14:48:21.153] +++ command: run_create_job_tests
I0827 14:48:21.168] +++ [0827 14:48:21] Creating namespace namespace-1566917301-23764
I0827 14:48:21.245] namespace/namespace-1566917301-23764 created
I0827 14:48:21.319] Context "test" modified.
I0827 14:48:21.405] job.batch/test-job created
W0827 14:48:21.507] E0827 14:48:21.354115   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:21.507] I0827 14:48:21.401589   52919 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1566917301-23764", Name:"test-job", UID:"2700d807-547d-4ba0-9172-395bbacf8d17", APIVersion:"batch/v1", ResourceVersion:"1375", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-b5nfn
W0827 14:48:21.507] E0827 14:48:21.453948   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:21.572] E0827 14:48:21.571713   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:21.673] create.sh:86: Successful get job test-job {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/nginx:test-cmd
I0827 14:48:21.673] job.batch "test-job" deleted
I0827 14:48:21.705] job.batch/test-job-pi created
W0827 14:48:21.806] E0827 14:48:21.676317   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:21.807] I0827 14:48:21.699097   52919 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1566917301-23764", Name:"test-job-pi", UID:"23507eef-6ed3-4f30-9737-db9eb862b9dd", APIVersion:"batch/v1", ResourceVersion:"1383", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-mk5sv
I0827 14:48:21.907] create.sh:92: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl
I0827 14:48:21.908] job.batch "test-job-pi" deleted
I0827 14:48:21.987] cronjob.batch/test-pi created
W0827 14:48:22.088] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0827 14:48:22.093] I0827 14:48:22.092433   52919 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1566917301-23764", Name:"my-pi", UID:"e51e8bc8-d312-42b9-893c-9a8a5e1faec9", APIVersion:"batch/v1", ResourceVersion:"1391", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-2pbzr
I0827 14:48:22.193] job.batch/my-pi created
I0827 14:48:22.194] Successful
I0827 14:48:22.194] message:[perl -Mbignum=bpi -wle print bpi(10)]
I0827 14:48:22.194] has:perl -Mbignum=bpi -wle print bpi(10)
I0827 14:48:22.277] job.batch "my-pi" deleted
W0827 14:48:22.378] E0827 14:48:22.360045   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:22.456] E0827 14:48:22.455919   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:22.557] cronjob.batch "test-pi" deleted
I0827 14:48:22.557] +++ exit code: 0
I0827 14:48:22.557] Recording: run_pod_templates_tests
I0827 14:48:22.557] Running command: run_pod_templates_tests
I0827 14:48:22.558] 
I0827 14:48:22.558] +++ Running case: test-cmd.run_pod_templates_tests 
... skipping 2 lines ...
I0827 14:48:22.558] +++ [0827 14:48:22] Creating namespace namespace-1566917302-32657
I0827 14:48:22.625] namespace/namespace-1566917302-32657 created
I0827 14:48:22.705] Context "test" modified.
I0827 14:48:22.715] +++ [0827 14:48:22] Testing pod templates
I0827 14:48:22.814] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:22.991] podtemplate/nginx created
W0827 14:48:23.091] E0827 14:48:22.573792   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:23.092] E0827 14:48:22.677626   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:23.092] I0827 14:48:22.987898   49414 controller.go:606] quota admission added evaluator for: podtemplates
I0827 14:48:23.192] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0827 14:48:23.193] NAME    CONTAINERS   IMAGES   POD LABELS
I0827 14:48:23.193] nginx   nginx        nginx    name=nginx
W0827 14:48:23.363] E0827 14:48:23.362010   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:23.458] E0827 14:48:23.457655   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:23.559] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0827 14:48:23.560] podtemplate "nginx" deleted
I0827 14:48:23.625] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:23.639] +++ exit code: 0
I0827 14:48:23.680] Recording: run_service_tests
I0827 14:48:23.680] Running command: run_service_tests
I0827 14:48:23.714] 
I0827 14:48:23.718] +++ Running case: test-cmd.run_service_tests 
I0827 14:48:23.722] +++ working dir: /go/src/k8s.io/kubernetes
I0827 14:48:23.725] +++ command: run_service_tests
I0827 14:48:23.822] Context "test" modified.
I0827 14:48:23.829] +++ [0827 14:48:23] Testing kubectl(v1:services)
W0827 14:48:23.930] E0827 14:48:23.576040   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:23.930] E0827 14:48:23.679196   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:24.031] core.sh:858: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0827 14:48:24.129] service/redis-master created
I0827 14:48:24.258] core.sh:862: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0827 14:48:24.419] core.sh:864: Successful describe services redis-master:
I0827 14:48:24.420] Name:              redis-master
I0827 14:48:24.420] Namespace:         default
... skipping 22 lines ...
I0827 14:48:24.540] Port:              <unset>  6379/TCP
I0827 14:48:24.540] TargetPort:        6379/TCP
I0827 14:48:24.540] Endpoints:         <none>
I0827 14:48:24.540] Session Affinity:  None
I0827 14:48:24.540] Events:            <none>
I0827 14:48:24.540] 
W0827 14:48:24.641] E0827 14:48:24.363737   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:24.642] E0827 14:48:24.459558   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:24.642] E0827 14:48:24.577565   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:24.681] E0827 14:48:24.680807   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:24.782] core.sh:868: Successful describe
I0827 14:48:24.782] Name:              redis-master
I0827 14:48:24.782] Namespace:         default
I0827 14:48:24.782] Labels:            app=redis
I0827 14:48:24.782]                    role=master
I0827 14:48:24.783]                    tier=backend
... skipping 208 lines ...
I0827 14:48:25.584]     role: padawan
I0827 14:48:25.584]   sessionAffinity: None
I0827 14:48:25.584]   type: ClusterIP
I0827 14:48:25.584] status:
I0827 14:48:25.584]   loadBalancer: {}
I0827 14:48:25.669] service/redis-master selector updated
W0827 14:48:25.770] E0827 14:48:25.365547   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:25.770] E0827 14:48:25.461720   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:25.771] E0827 14:48:25.579155   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:25.771] E0827 14:48:25.682587   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:25.871] core.sh:890: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: padawan:
I0827 14:48:25.878] service/redis-master selector updated
I0827 14:48:25.975] core.sh:894: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0827 14:48:26.059] apiVersion: v1
I0827 14:48:26.060] kind: Service
I0827 14:48:26.060] metadata:
... skipping 43 lines ...
I0827 14:48:26.068]   selector:
I0827 14:48:26.068]     role: padawan
I0827 14:48:26.068]   sessionAffinity: None
I0827 14:48:26.069]   type: ClusterIP
I0827 14:48:26.069] status:
I0827 14:48:26.069]   loadBalancer: {}
W0827 14:48:26.170] error: you must specify resources by --filename when --local is set.
W0827 14:48:26.171] Example resource specifications include:
W0827 14:48:26.171]    '-f rsrc.yaml'
W0827 14:48:26.171]    '--filename=rsrc.json'
I0827 14:48:26.272] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0827 14:48:26.431] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0827 14:48:26.533] (Bservice "redis-master" deleted
W0827 14:48:26.634] E0827 14:48:26.367803   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:26.635] E0827 14:48:26.463378   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:26.636] E0827 14:48:26.581491   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:26.685] E0827 14:48:26.684412   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:26.786] core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0827 14:48:26.786] core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0827 14:48:26.951] service/redis-master created
I0827 14:48:27.077] core.sh:920: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0827 14:48:27.192] core.sh:924: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0827 14:48:27.368] service/service-v1-test created
I0827 14:48:27.469] core.sh:945: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0827 14:48:27.640] service/service-v1-test replaced
I0827 14:48:27.738] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0827 14:48:27.827] service "redis-master" deleted
W0827 14:48:27.929] E0827 14:48:27.369233   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:27.929] E0827 14:48:27.465854   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:27.929] E0827 14:48:27.583007   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:27.929] E0827 14:48:27.686004   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:28.030] service "service-v1-test" deleted
I0827 14:48:28.037] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0827 14:48:28.143] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0827 14:48:28.323] service/redis-master created
W0827 14:48:28.424] E0827 14:48:28.370993   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:28.467] E0827 14:48:28.467246   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:28.568] service/redis-slave created
I0827 14:48:28.618] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0827 14:48:28.705] Successful
I0827 14:48:28.705] message:NAME           RSRC
I0827 14:48:28.705] kubernetes     145
I0827 14:48:28.706] redis-master   1427
I0827 14:48:28.706] redis-slave    1430
I0827 14:48:28.706] has:redis-master
I0827 14:48:28.797] core.sh:979: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0827 14:48:28.894] service "redis-master" deleted
I0827 14:48:28.903] service "redis-slave" deleted
W0827 14:48:29.004] E0827 14:48:28.584436   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:29.004] E0827 14:48:28.687433   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:29.105] core.sh:986: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0827 14:48:29.115] core.sh:990: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0827 14:48:29.189] service/beep-boop created
I0827 14:48:29.307] core.sh:994: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
I0827 14:48:29.403] (Bcore.sh:998: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
I0827 14:48:29.487] service "beep-boop" deleted
I0827 14:48:29.589] core.sh:1005: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0827 14:48:29.678] core.sh:1009: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:29.776] service/testmetadata created
I0827 14:48:29.777] deployment.apps/testmetadata created
W0827 14:48:29.877] E0827 14:48:29.372928   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:29.878] E0827 14:48:29.468938   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:29.879] E0827 14:48:29.585795   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:29.879] E0827 14:48:29.689229   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:29.880] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0827 14:48:29.880] I0827 14:48:29.761983   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"51e856e7-cef3-49e8-a406-a6dc39fe2bd1", APIVersion:"apps/v1", ResourceVersion:"1444", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-6cdd84c77d to 2
W0827 14:48:29.881] I0827 14:48:29.767435   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6cdd84c77d", UID:"dd5a251f-1fd2-4699-8516-0f7bbd4ceabd", APIVersion:"apps/v1", ResourceVersion:"1445", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6cdd84c77d-v5959
W0827 14:48:29.882] I0827 14:48:29.771331   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6cdd84c77d", UID:"dd5a251f-1fd2-4699-8516-0f7bbd4ceabd", APIVersion:"apps/v1", ResourceVersion:"1445", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6cdd84c77d-6wntn
I0827 14:48:29.983] core.sh:1013: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: testmetadata:
I0827 14:48:29.991] core.sh:1014: Successful get service testmetadata {{.metadata.annotations}}: map[zone-context:home]
I0827 14:48:30.088] service/exposemetadata exposed
I0827 14:48:30.186] core.sh:1020: Successful get service exposemetadata {{.metadata.annotations}}: map[zone-context:work]
I0827 14:48:30.279] (Bservice "exposemetadata" deleted
I0827 14:48:30.293] service "testmetadata" deleted
W0827 14:48:30.394] E0827 14:48:30.374328   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:30.470] E0827 14:48:30.470297   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:30.571] deployment.apps "testmetadata" deleted
I0827 14:48:30.572] +++ exit code: 0
I0827 14:48:30.572] Recording: run_daemonset_tests
I0827 14:48:30.572] Running command: run_daemonset_tests
I0827 14:48:30.572] 
I0827 14:48:30.572] +++ Running case: test-cmd.run_daemonset_tests 
... skipping 25 lines ...
I0827 14:48:32.186] +++ working dir: /go/src/k8s.io/kubernetes
I0827 14:48:32.190] +++ command: run_daemonset_history_tests
I0827 14:48:32.206] +++ [0827 14:48:32] Creating namespace namespace-1566917312-17943
I0827 14:48:32.290] namespace/namespace-1566917312-17943 created
I0827 14:48:32.369] Context "test" modified.
I0827 14:48:32.376] +++ [0827 14:48:32] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
W0827 14:48:32.477] E0827 14:48:30.587232   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:32.477] E0827 14:48:30.690866   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:32.477] I0827 14:48:30.885179   49414 controller.go:606] quota admission added evaluator for: daemonsets.apps
W0827 14:48:32.477] I0827 14:48:30.896814   49414 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W0827 14:48:32.478] E0827 14:48:31.376113   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:32.478] E0827 14:48:31.473076   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:32.478] E0827 14:48:31.588491   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:32.478] E0827 14:48:31.692158   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:32.479] E0827 14:48:32.378100   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:32.479] E0827 14:48:32.474538   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:32.579] apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:32.682] daemonset.apps/bind created
W0827 14:48:32.783] E0827 14:48:32.590194   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:32.784] E0827 14:48:32.694396   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:32.885] apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1566917312-17943"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I0827 14:48:32.885]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
I0827 14:48:32.933] daemonset.apps/bind skipped rollback (current template already matches revision 1)
I0827 14:48:33.042] apps.sh:73: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0827 14:48:33.145] apps.sh:74: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0827 14:48:33.336] daemonset.apps/bind configured
W0827 14:48:33.437] E0827 14:48:33.380149   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:33.477] E0827 14:48:33.476303   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:33.577] apps.sh:77: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0827 14:48:33.585] apps.sh:78: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0827 14:48:33.720] apps.sh:79: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0827 14:48:33.840] apps.sh:80: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:2 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1566917312-17943"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:latest","name":"kubernetes-pause"},{"image":"k8s.gcr.io/nginx:test-cmd","name":"app"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I0827 14:48:33.842]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1566917312-17943"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I0827 14:48:33.842]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
... skipping 5 lines ...
I0827 14:48:33.964]     Port:	<none>
I0827 14:48:33.964]     Host Port:	<none>
I0827 14:48:33.964]     Environment:	<none>
I0827 14:48:33.964]     Mounts:	<none>
I0827 14:48:33.964]   Volumes:	<none>
I0827 14:48:33.964]  (dry run)
W0827 14:48:34.065] E0827 14:48:33.591974   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:34.065] E0827 14:48:33.696465   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:34.166] apps.sh:83: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0827 14:48:34.192] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0827 14:48:34.297] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0827 14:48:34.425] daemonset.apps/bind rolled back
W0827 14:48:34.526] E0827 14:48:34.381848   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:34.526] E0827 14:48:34.477902   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:34.594] E0827 14:48:34.594030   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:34.695] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0827 14:48:34.696] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0827 14:48:34.756] Successful
I0827 14:48:34.756] message:error: unable to find specified revision 1000000 in history
I0827 14:48:34.756] has:unable to find specified revision
W0827 14:48:34.857] E0827 14:48:34.698230   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:34.958] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0827 14:48:34.982] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0827 14:48:35.095] daemonset.apps/bind rolled back
W0827 14:48:35.200] E0827 14:48:35.116914   52919 daemon_controller.go:302] namespace-1566917312-17943/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1566917312-17943", SelfLink:"/apis/apps/v1/namespaces/namespace-1566917312-17943/daemonsets/bind", UID:"46f3b6f8-5885-4c55-a047-f10049263ba5", ResourceVersion:"1515", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63702514112, loc:(*time.Location)(0x73d5240)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1566917312-17943\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0010e8420), Fields:(*v1.Fields)(0xc0010e8460)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0010e84a0), Fields:(*v1.Fields)(0xc0010e84c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0010e84e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001d9b808), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000f6e240), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc0010e8500), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000c72b90)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001d9b86c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
I0827 14:48:35.301] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0827 14:48:35.306] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0827 14:48:35.405] apps.sh:99: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0827 14:48:35.484] daemonset.apps "bind" deleted
I0827 14:48:35.508] +++ exit code: 0
I0827 14:48:35.547] Recording: run_rc_tests
... skipping 9 lines ...
I0827 14:48:35.864] core.sh:1046: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:36.017] replicationcontroller/frontend created
I0827 14:48:36.114] replicationcontroller "frontend" deleted
I0827 14:48:36.223] core.sh:1051: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:36.328] core.sh:1055: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:36.490] replicationcontroller/frontend created
W0827 14:48:36.590] E0827 14:48:35.383291   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:36.591] E0827 14:48:35.479676   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:36.591] E0827 14:48:35.595618   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:36.591] E0827 14:48:35.700144   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:36.592] I0827 14:48:36.025143   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"frontend", UID:"ff1f00c4-877f-4666-8dd4-c82c43ee7241", APIVersion:"v1", ResourceVersion:"1524", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-sdzkl
W0827 14:48:36.592] I0827 14:48:36.030686   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"frontend", UID:"ff1f00c4-877f-4666-8dd4-c82c43ee7241", APIVersion:"v1", ResourceVersion:"1524", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xvmhf
W0827 14:48:36.592] I0827 14:48:36.031208   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"frontend", UID:"ff1f00c4-877f-4666-8dd4-c82c43ee7241", APIVersion:"v1", ResourceVersion:"1524", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6q7wx
W0827 14:48:36.593] E0827 14:48:36.384979   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:36.593] E0827 14:48:36.481366   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:36.593] I0827 14:48:36.497077   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"frontend", UID:"cd25557d-3ee6-4390-9beb-286a0c04c4b3", APIVersion:"v1", ResourceVersion:"1540", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dxngp
W0827 14:48:36.593] I0827 14:48:36.501506   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"frontend", UID:"cd25557d-3ee6-4390-9beb-286a0c04c4b3", APIVersion:"v1", ResourceVersion:"1540", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-lgnz2
W0827 14:48:36.594] I0827 14:48:36.502715   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"frontend", UID:"cd25557d-3ee6-4390-9beb-286a0c04c4b3", APIVersion:"v1", ResourceVersion:"1540", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-82rvn
W0827 14:48:36.597] E0827 14:48:36.597380   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:36.699] core.sh:1059: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I0827 14:48:36.769] core.sh:1061: Successful describe rc frontend:
I0827 14:48:36.769] Name:         frontend
I0827 14:48:36.769] Namespace:    namespace-1566917315-6333
I0827 14:48:36.769] Selector:     app=guestbook,tier=frontend
I0827 14:48:36.769] Labels:       app=guestbook
I0827 14:48:36.769]               tier=frontend
I0827 14:48:36.769] Annotations:  <none>
I0827 14:48:36.769] Replicas:     3 current / 3 desired
I0827 14:48:36.770] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0827 14:48:36.770] Pod Template:
I0827 14:48:36.770]   Labels:  app=guestbook
I0827 14:48:36.770]            tier=frontend
I0827 14:48:36.770]   Containers:
I0827 14:48:36.770]    php-redis:
I0827 14:48:36.770]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0827 14:48:36.896] Namespace:    namespace-1566917315-6333
I0827 14:48:36.896] Selector:     app=guestbook,tier=frontend
I0827 14:48:36.896] Labels:       app=guestbook
I0827 14:48:36.896]               tier=frontend
I0827 14:48:36.897] Annotations:  <none>
I0827 14:48:36.897] Replicas:     3 current / 3 desired
I0827 14:48:36.897] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0827 14:48:36.897] Pod Template:
I0827 14:48:36.897]   Labels:  app=guestbook
I0827 14:48:36.897]            tier=frontend
I0827 14:48:36.897]   Containers:
I0827 14:48:36.897]    php-redis:
I0827 14:48:36.897]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
I0827 14:48:36.898]   Type    Reason            Age   From                    Message
I0827 14:48:36.898]   ----    ------            ----  ----                    -------
I0827 14:48:36.899]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-dxngp
I0827 14:48:36.899]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-lgnz2
I0827 14:48:36.899]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-82rvn
I0827 14:48:36.899]
W0827 14:48:37.000] E0827 14:48:36.701618   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:37.100] core.sh:1065: Successful describe
I0827 14:48:37.100] Name:         frontend
I0827 14:48:37.101] Namespace:    namespace-1566917315-6333
I0827 14:48:37.101] Selector:     app=guestbook,tier=frontend
I0827 14:48:37.101] Labels:       app=guestbook
I0827 14:48:37.101]               tier=frontend
I0827 14:48:37.101] Annotations:  <none>
I0827 14:48:37.101] Replicas:     3 current / 3 desired
I0827 14:48:37.101] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0827 14:48:37.101] Pod Template:
I0827 14:48:37.101]   Labels:  app=guestbook
I0827 14:48:37.101]            tier=frontend
I0827 14:48:37.102]   Containers:
I0827 14:48:37.102]    php-redis:
I0827 14:48:37.102]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0827 14:48:37.136] Namespace:    namespace-1566917315-6333
I0827 14:48:37.136] Selector:     app=guestbook,tier=frontend
I0827 14:48:37.136] Labels:       app=guestbook
I0827 14:48:37.136]               tier=frontend
I0827 14:48:37.136] Annotations:  <none>
I0827 14:48:37.136] Replicas:     3 current / 3 desired
I0827 14:48:37.136] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0827 14:48:37.136] Pod Template:
I0827 14:48:37.136]   Labels:  app=guestbook
I0827 14:48:37.136]            tier=frontend
I0827 14:48:37.137]   Containers:
I0827 14:48:37.137]    php-redis:
I0827 14:48:37.137]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0827 14:48:37.298] Namespace:    namespace-1566917315-6333
I0827 14:48:37.298] Selector:     app=guestbook,tier=frontend
I0827 14:48:37.298] Labels:       app=guestbook
I0827 14:48:37.298]               tier=frontend
I0827 14:48:37.298] Annotations:  <none>
I0827 14:48:37.298] Replicas:     3 current / 3 desired
I0827 14:48:37.298] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0827 14:48:37.298] Pod Template:
I0827 14:48:37.299]   Labels:  app=guestbook
I0827 14:48:37.299]            tier=frontend
I0827 14:48:37.299]   Containers:
I0827 14:48:37.299]    php-redis:
I0827 14:48:37.299]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0827 14:48:37.416] Namespace:    namespace-1566917315-6333
I0827 14:48:37.416] Selector:     app=guestbook,tier=frontend
I0827 14:48:37.416] Labels:       app=guestbook
I0827 14:48:37.417]               tier=frontend
I0827 14:48:37.417] Annotations:  <none>
I0827 14:48:37.417] Replicas:     3 current / 3 desired
I0827 14:48:37.417] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0827 14:48:37.417] Pod Template:
I0827 14:48:37.417]   Labels:  app=guestbook
I0827 14:48:37.418]            tier=frontend
I0827 14:48:37.418]   Containers:
I0827 14:48:37.418]    php-redis:
I0827 14:48:37.418]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0827 14:48:37.534] Namespace:    namespace-1566917315-6333
I0827 14:48:37.534] Selector:     app=guestbook,tier=frontend
I0827 14:48:37.535] Labels:       app=guestbook
I0827 14:48:37.535]               tier=frontend
I0827 14:48:37.535] Annotations:  <none>
I0827 14:48:37.535] Replicas:     3 current / 3 desired
I0827 14:48:37.536] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0827 14:48:37.536] Pod Template:
I0827 14:48:37.536]   Labels:  app=guestbook
I0827 14:48:37.537]            tier=frontend
I0827 14:48:37.537]   Containers:
I0827 14:48:37.537]    php-redis:
I0827 14:48:37.537]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0827 14:48:37.655] Namespace:    namespace-1566917315-6333
I0827 14:48:37.655] Selector:     app=guestbook,tier=frontend
I0827 14:48:37.655] Labels:       app=guestbook
I0827 14:48:37.655]               tier=frontend
I0827 14:48:37.655] Annotations:  <none>
I0827 14:48:37.656] Replicas:     3 current / 3 desired
I0827 14:48:37.656] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0827 14:48:37.656] Pod Template:
I0827 14:48:37.656]   Labels:  app=guestbook
I0827 14:48:37.656]            tier=frontend
I0827 14:48:37.656]   Containers:
I0827 14:48:37.656]    php-redis:
I0827 14:48:37.656]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 21 lines ...
I0827 14:48:38.400] replicationcontroller/frontend scaled
I0827 14:48:38.497] core.sh:1099: Successful get rc frontend {{.spec.replicas}}: 3
I0827 14:48:38.595] core.sh:1103: Successful get rc frontend {{.spec.replicas}}: 3
I0827 14:48:38.680] replicationcontroller/frontend scaled
I0827 14:48:38.785] core.sh:1107: Successful get rc frontend {{.spec.replicas}}: 2
I0827 14:48:38.872] replicationcontroller "frontend" deleted
W0827 14:48:38.973] E0827 14:48:37.386793   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:38.974] E0827 14:48:37.483069   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:38.975] E0827 14:48:37.599291   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:38.975] E0827 14:48:37.703406   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:38.976] I0827 14:48:37.852699   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"frontend", UID:"cd25557d-3ee6-4390-9beb-286a0c04c4b3", APIVersion:"v1", ResourceVersion:"1550", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-dxngp
W0827 14:48:38.976] error: Expected replicas to be 3, was 2
W0827 14:48:38.976] E0827 14:48:38.389271   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:38.977] I0827 14:48:38.404620   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"frontend", UID:"cd25557d-3ee6-4390-9beb-286a0c04c4b3", APIVersion:"v1", ResourceVersion:"1556", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-g647s
W0827 14:48:38.978] E0827 14:48:38.484509   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:38.978] E0827 14:48:38.600728   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:38.979] I0827 14:48:38.687584   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"frontend", UID:"cd25557d-3ee6-4390-9beb-286a0c04c4b3", APIVersion:"v1", ResourceVersion:"1561", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-g647s
W0827 14:48:38.979] E0827 14:48:38.705092   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:39.068] I0827 14:48:39.067705   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"redis-master", UID:"f1aa3a6b-3436-4c7f-9b8f-f7ed5d9607fd", APIVersion:"v1", ResourceVersion:"1573", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-d722w
I0827 14:48:39.170] replicationcontroller/redis-master created
I0827 14:48:39.253] replicationcontroller/redis-slave created
W0827 14:48:39.354] I0827 14:48:39.258721   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"redis-slave", UID:"e96344be-c9e8-42fa-8bae-deb93b0532c3", APIVersion:"v1", ResourceVersion:"1579", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-gg6xj
W0827 14:48:39.355] I0827 14:48:39.263422   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"redis-slave", UID:"e96344be-c9e8-42fa-8bae-deb93b0532c3", APIVersion:"v1", ResourceVersion:"1579", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-zdrl2
W0827 14:48:39.361] I0827 14:48:39.360165   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"redis-master", UID:"f1aa3a6b-3436-4c7f-9b8f-f7ed5d9607fd", APIVersion:"v1", ResourceVersion:"1586", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-k8n6v
W0827 14:48:39.365] I0827 14:48:39.364543   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"redis-master", UID:"f1aa3a6b-3436-4c7f-9b8f-f7ed5d9607fd", APIVersion:"v1", ResourceVersion:"1586", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-72m2n
W0827 14:48:39.377] I0827 14:48:39.376331   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"redis-master", UID:"f1aa3a6b-3436-4c7f-9b8f-f7ed5d9607fd", APIVersion:"v1", ResourceVersion:"1586", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-ksbd9
W0827 14:48:39.378] I0827 14:48:39.376478   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"redis-slave", UID:"e96344be-c9e8-42fa-8bae-deb93b0532c3", APIVersion:"v1", ResourceVersion:"1589", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-b7cmq
W0827 14:48:39.383] I0827 14:48:39.382755   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"redis-slave", UID:"e96344be-c9e8-42fa-8bae-deb93b0532c3", APIVersion:"v1", ResourceVersion:"1589", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-fk9nz
W0827 14:48:39.391] E0827 14:48:39.391241   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:39.487] E0827 14:48:39.486444   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:39.588] replicationcontroller/redis-master scaled
I0827 14:48:39.588] replicationcontroller/redis-slave scaled
I0827 14:48:39.588] core.sh:1117: Successful get rc redis-master {{.spec.replicas}}: 4
I0827 14:48:39.589] core.sh:1118: Successful get rc redis-slave {{.spec.replicas}}: 4
I0827 14:48:39.667] replicationcontroller "redis-master" deleted
I0827 14:48:39.676] replicationcontroller "redis-slave" deleted
W0827 14:48:39.777] E0827 14:48:39.602663   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:39.778] E0827 14:48:39.707092   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
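The core.sh:1117-1118 assertions above verify multi-resource scaling of the two replication controllers. A minimal sketch of the kind of invocation under test — the resource names and expected replica counts come from the log, but the exact flag spellings are an assumption, since the log records only the results:

```shell
# Scale both replication controllers in one command, then read back
# .spec.replicas with a Go template (names from the log; flags assumed).
kubectl scale rc redis-master redis-slave --replicas=4
kubectl get rc redis-master -o go-template='{{.spec.replicas}}'   # log shows 4
kubectl get rc redis-slave -o go-template='{{.spec.replicas}}'    # log shows 4
```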
I0827 14:48:39.878] deployment.apps/nginx-deployment created
W0827 14:48:39.980] I0827 14:48:39.882160   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment", UID:"26b89981-b931-4a0d-8d29-588a00aa4c58", APIVersion:"apps/v1", ResourceVersion:"1620", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-66987bfc58 to 3
W0827 14:48:39.981] I0827 14:48:39.887236   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-66987bfc58", UID:"6217564f-7acf-4342-a5d2-1adda57b3038", APIVersion:"apps/v1", ResourceVersion:"1621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-66987bfc58-xg2vv
W0827 14:48:39.982] I0827 14:48:39.892786   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-66987bfc58", UID:"6217564f-7acf-4342-a5d2-1adda57b3038", APIVersion:"apps/v1", ResourceVersion:"1621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-66987bfc58-7px5m
W0827 14:48:39.982] I0827 14:48:39.893160   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-66987bfc58", UID:"6217564f-7acf-4342-a5d2-1adda57b3038", APIVersion:"apps/v1", ResourceVersion:"1621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-66987bfc58-xzb72
W0827 14:48:40.002] I0827 14:48:40.001394   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment", UID:"26b89981-b931-4a0d-8d29-588a00aa4c58", APIVersion:"apps/v1", ResourceVersion:"1634", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-66987bfc58 to 1
... skipping 3 lines ...
I0827 14:48:40.115] core.sh:1127: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
I0827 14:48:40.194] deployment.apps "nginx-deployment" deleted
I0827 14:48:40.315] Successful
I0827 14:48:40.315] message:service/expose-test-deployment exposed
I0827 14:48:40.315] has:service/expose-test-deployment exposed
I0827 14:48:40.412] service "expose-test-deployment" deleted
W0827 14:48:40.512] E0827 14:48:40.393103   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:40.513] E0827 14:48:40.488127   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:40.608] E0827 14:48:40.604138   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:40.692] I0827 14:48:40.692175   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment", UID:"f46ecf14-8482-41d5-a773-1800cbd43e08", APIVersion:"apps/v1", ResourceVersion:"1658", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-66987bfc58 to 3
W0827 14:48:40.697] I0827 14:48:40.696757   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-66987bfc58", UID:"ce0471eb-2cab-45bc-ad36-518dca0e5359", APIVersion:"apps/v1", ResourceVersion:"1659", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-66987bfc58-gzzz9
W0827 14:48:40.703] I0827 14:48:40.702293   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-66987bfc58", UID:"ce0471eb-2cab-45bc-ad36-518dca0e5359", APIVersion:"apps/v1", ResourceVersion:"1659", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-66987bfc58-bt2s5
W0827 14:48:40.704] I0827 14:48:40.703342   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-66987bfc58", UID:"ce0471eb-2cab-45bc-ad36-518dca0e5359", APIVersion:"apps/v1", ResourceVersion:"1659", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-66987bfc58-b7p5h
W0827 14:48:40.709] E0827 14:48:40.708416   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:40.809] Successful
I0827 14:48:40.810] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0827 14:48:40.811] See 'kubectl expose -h' for help and examples
I0827 14:48:40.811] has:invalid deployment: no selectors
I0827 14:48:40.811] deployment.apps/nginx-deployment created
I0827 14:48:40.812] core.sh:1146: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I0827 14:48:40.896] service/nginx-deployment exposed
I0827 14:48:41.008] core.sh:1150: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
I0827 14:48:41.091] deployment.apps "nginx-deployment" deleted
I0827 14:48:41.103] service "nginx-deployment" deleted
I0827 14:48:41.305] replicationcontroller/frontend created
W0827 14:48:41.407] I0827 14:48:41.306288   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"frontend", UID:"3651447f-a9fa-4b43-895f-6cd0a9a12c02", APIVersion:"v1", ResourceVersion:"1680", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-j26h9
W0827 14:48:41.408] I0827 14:48:41.311322   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"frontend", UID:"3651447f-a9fa-4b43-895f-6cd0a9a12c02", APIVersion:"v1", ResourceVersion:"1680", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tqgtt
W0827 14:48:41.409] I0827 14:48:41.312561   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"frontend", UID:"3651447f-a9fa-4b43-895f-6cd0a9a12c02", APIVersion:"v1", ResourceVersion:"1680", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-562g4
W0827 14:48:41.409] E0827 14:48:41.394955   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:41.490] E0827 14:48:41.489660   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:41.591] core.sh:1157: Successful get rc frontend {{.spec.replicas}}: 3
I0827 14:48:41.591] service/frontend exposed
I0827 14:48:41.613] core.sh:1161: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0827 14:48:41.709] service/frontend-2 exposed
I0827 14:48:41.811] core.sh:1165: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 443
I0827 14:48:41.988] pod/valid-pod created
W0827 14:48:42.088] E0827 14:48:41.614458   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:42.089] E0827 14:48:41.710621   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:42.189] service/frontend-3 exposed
I0827 14:48:42.198] core.sh:1170: Successful get service frontend-3 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 444
I0827 14:48:42.311] service/frontend-4 exposed
W0827 14:48:42.412] E0827 14:48:42.396803   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:42.492] E0827 14:48:42.491745   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:42.593] core.sh:1174: Successful get service frontend-4 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: default 80
I0827 14:48:42.593] service/frontend-5 exposed
I0827 14:48:42.632] core.sh:1178: Successful get service frontend-5 {{(index .spec.ports 0).port}}: 80
I0827 14:48:42.730] pod "valid-pod" deleted
W0827 14:48:42.831] E0827 14:48:42.616141   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:42.831] E0827 14:48:42.712231   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:42.932] service "frontend" deleted
I0827 14:48:42.932] service "frontend-2" deleted
I0827 14:48:42.932] service "frontend-3" deleted
I0827 14:48:42.932] service "frontend-4" deleted
I0827 14:48:42.933] service "frontend-5" deleted
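The frontend/frontend-2 through frontend-5 services above exercise `kubectl expose` with varying ports and source resources. A hedged sketch of the pattern being tested — service names and ports are taken from the log; which resource each service is exposed from, and the exact flags, are assumptions:

```shell
# Expose the same workload under several service names/ports, then
# verify the first port entry with a Go template (values per the log).
kubectl expose rc frontend --port=80
kubectl expose service frontend --port=443 --name=frontend-2
kubectl get service frontend-2 -o go-template='{{(index .spec.ports 0).port}}'  # log shows 443
kubectl expose pod valid-pod --port=444 --name=frontend-3
```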
I0827 14:48:42.991] Successful
I0827 14:48:42.992] message:error: cannot expose a Node
I0827 14:48:42.992] has:cannot expose
I0827 14:48:43.093] Successful
I0827 14:48:43.094] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0827 14:48:43.094] has:metadata.name: Invalid value
I0827 14:48:43.199] Successful
I0827 14:48:43.199] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 4 lines ...
I0827 14:48:43.390] has:etcd-server exposed
I0827 14:48:43.481] core.sh:1208: Successful get service etcd-server {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: port-1 2380
I0827 14:48:43.577] core.sh:1209: Successful get service etcd-server {{(index .spec.ports 1).name}} {{(index .spec.ports 1).port}}: port-2 2379
I0827 14:48:43.667] service "etcd-server" deleted
I0827 14:48:43.765] core.sh:1215: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I0827 14:48:43.852] replicationcontroller "frontend" deleted
W0827 14:48:43.953] E0827 14:48:43.402421   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:43.953] E0827 14:48:43.493847   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:43.954] E0827 14:48:43.617649   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:43.954] E0827 14:48:43.713927   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:44.054] core.sh:1219: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:44.060] core.sh:1223: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:44.222] replicationcontroller/frontend created
W0827 14:48:44.323] I0827 14:48:44.227695   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"frontend", UID:"fb15cc7f-ee31-48c8-ac49-aa2dc226e274", APIVersion:"v1", ResourceVersion:"1749", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-t4jr5
W0827 14:48:44.324] I0827 14:48:44.231436   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"frontend", UID:"fb15cc7f-ee31-48c8-ac49-aa2dc226e274", APIVersion:"v1", ResourceVersion:"1749", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-pwrmm
W0827 14:48:44.324] I0827 14:48:44.232887   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"frontend", UID:"fb15cc7f-ee31-48c8-ac49-aa2dc226e274", APIVersion:"v1", ResourceVersion:"1749", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xg7dk
W0827 14:48:44.403] I0827 14:48:44.402554   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"redis-slave", UID:"57a15efc-8864-4395-a4ed-6e96ba02226c", APIVersion:"v1", ResourceVersion:"1758", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-q8xxw
W0827 14:48:44.406] E0827 14:48:44.405567   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:44.407] I0827 14:48:44.406745   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"redis-slave", UID:"57a15efc-8864-4395-a4ed-6e96ba02226c", APIVersion:"v1", ResourceVersion:"1758", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-9z927
W0827 14:48:44.495] E0827 14:48:44.495236   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:44.596] replicationcontroller/redis-slave created
I0827 14:48:44.597] core.sh:1228: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
I0827 14:48:44.597] core.sh:1232: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
I0827 14:48:44.680] replicationcontroller "frontend" deleted
I0827 14:48:44.685] replicationcontroller "redis-slave" deleted
W0827 14:48:44.785] E0827 14:48:44.619570   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:44.786] E0827 14:48:44.715305   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:44.886] core.sh:1236: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:44.887] core.sh:1240: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:45.051] replicationcontroller/frontend created
W0827 14:48:45.152] I0827 14:48:45.056802   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"frontend", UID:"eef00bf1-a62b-4498-9e70-638a901f9ed7", APIVersion:"v1", ResourceVersion:"1777", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-btnm6
W0827 14:48:45.153] I0827 14:48:45.060581   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"frontend", UID:"eef00bf1-a62b-4498-9e70-638a901f9ed7", APIVersion:"v1", ResourceVersion:"1777", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kbkks
W0827 14:48:45.153] I0827 14:48:45.061083   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1566917315-6333", Name:"frontend", UID:"eef00bf1-a62b-4498-9e70-638a901f9ed7", APIVersion:"v1", ResourceVersion:"1777", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-67m8p
I0827 14:48:45.254] core.sh:1243: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I0827 14:48:45.254] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0827 14:48:45.326] core.sh:1246: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0827 14:48:45.408] horizontalpodautoscaler.autoscaling "frontend" deleted
I0827 14:48:45.499] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0827 14:48:45.595] core.sh:1250: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0827 14:48:45.672] horizontalpodautoscaler.autoscaling "frontend" deleted
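The two HPA checks above (core.sh:1246 and 1250), together with the `required flag(s) "max" not set` error that follows, exercise `kubectl autoscale`. A sketch under the assumption that the flags map directly onto the spec fields the log verifies (1 2 70, then 2 3 80):

```shell
# Autoscale the rc; min defaults to 1 when omitted, and --max is
# mandatory (hence the "required flag(s) \"max\" not set" error below).
kubectl autoscale rc frontend --max=2 --cpu-percent=70
kubectl get hpa frontend -o go-template='{{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}'  # log shows: 1 2 70
kubectl delete hpa frontend
kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80
```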
W0827 14:48:45.773] E0827 14:48:45.408121   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:45.774] E0827 14:48:45.497419   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:45.774] E0827 14:48:45.621359   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:45.775] E0827 14:48:45.716913   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:45.775] Error: required flag(s) "max" not set
W0827 14:48:45.775] 
W0827 14:48:45.775] 
W0827 14:48:45.775] Examples:
W0827 14:48:45.776]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0827 14:48:45.776]   kubectl autoscale deployment foo --min=2 --max=10
W0827 14:48:45.776]   
... skipping 54 lines ...
I0827 14:48:45.995]           limits:
I0827 14:48:45.995]             cpu: 300m
I0827 14:48:45.995]           requests:
I0827 14:48:45.995]             cpu: 300m
I0827 14:48:45.995]       terminationGracePeriodSeconds: 0
I0827 14:48:45.995] status: {}
W0827 14:48:46.096] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0827 14:48:46.259] deployment.apps/nginx-deployment-resources created
W0827 14:48:46.360] I0827 14:48:46.262987   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-resources", UID:"7467735f-2b3e-41d8-9699-4d2797d80c2b", APIVersion:"apps/v1", ResourceVersion:"1798", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6dbb5769d7 to 3
W0827 14:48:46.360] I0827 14:48:46.267347   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-resources-6dbb5769d7", UID:"129f59ba-18a2-49e1-a556-efef537f4046", APIVersion:"apps/v1", ResourceVersion:"1799", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6dbb5769d7-n6zgf
W0827 14:48:46.361] I0827 14:48:46.270297   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-resources-6dbb5769d7", UID:"129f59ba-18a2-49e1-a556-efef537f4046", APIVersion:"apps/v1", ResourceVersion:"1799", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6dbb5769d7-9vmrk
W0827 14:48:46.361] I0827 14:48:46.270768   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-resources-6dbb5769d7", UID:"129f59ba-18a2-49e1-a556-efef537f4046", APIVersion:"apps/v1", ResourceVersion:"1799", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6dbb5769d7-s2zcc
W0827 14:48:46.410] E0827 14:48:46.409696   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:46.499] E0827 14:48:46.498654   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:46.600] core.sh:1265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I0827 14:48:46.600] core.sh:1266: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0827 14:48:46.600] core.sh:1267: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0827 14:48:46.688] deployment.apps/nginx-deployment-resources resource requirements updated
W0827 14:48:46.789] E0827 14:48:46.622767   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:46.789] I0827 14:48:46.693657   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-resources", UID:"7467735f-2b3e-41d8-9699-4d2797d80c2b", APIVersion:"apps/v1", ResourceVersion:"1812", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-58d7fb85cf to 1
W0827 14:48:46.789] I0827 14:48:46.699389   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-resources-58d7fb85cf", UID:"1621ff18-09c2-45b3-a6b2-445b2e98db87", APIVersion:"apps/v1", ResourceVersion:"1813", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-58d7fb85cf-dtz7k
W0827 14:48:46.790] E0827 14:48:46.718004   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:46.890] core.sh:1270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I0827 14:48:46.891] core.sh:1271: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0827 14:48:47.100] deployment.apps/nginx-deployment-resources resource requirements updated
W0827 14:48:47.201] error: unable to find container named redis
W0827 14:48:47.202] I0827 14:48:47.118329   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-resources", UID:"7467735f-2b3e-41d8-9699-4d2797d80c2b", APIVersion:"apps/v1", ResourceVersion:"1822", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-58d7fb85cf to 0
W0827 14:48:47.202] I0827 14:48:47.123124   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-resources-58d7fb85cf", UID:"1621ff18-09c2-45b3-a6b2-445b2e98db87", APIVersion:"apps/v1", ResourceVersion:"1826", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-58d7fb85cf-dtz7k
W0827 14:48:47.203] I0827 14:48:47.132738   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-resources", UID:"7467735f-2b3e-41d8-9699-4d2797d80c2b", APIVersion:"apps/v1", ResourceVersion:"1825", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5cd64dc74f to 1
W0827 14:48:47.203] I0827 14:48:47.140865   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-resources-5cd64dc74f", UID:"fa26c23b-6c87-4046-bd9f-f4ffd9356143", APIVersion:"apps/v1", ResourceVersion:"1832", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5cd64dc74f-ts77p
I0827 14:48:47.304] core.sh:1276: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0827 14:48:47.331] core.sh:1277: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0827 14:48:47.423] deployment.apps/nginx-deployment-resources resource requirements updated
W0827 14:48:47.523] E0827 14:48:47.411060   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:47.524] I0827 14:48:47.439789   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-resources", UID:"7467735f-2b3e-41d8-9699-4d2797d80c2b", APIVersion:"apps/v1", ResourceVersion:"1844", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-6dbb5769d7 to 2
W0827 14:48:47.524] I0827 14:48:47.447543   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-resources-6dbb5769d7", UID:"129f59ba-18a2-49e1-a556-efef537f4046", APIVersion:"apps/v1", ResourceVersion:"1848", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-6dbb5769d7-n6zgf
W0827 14:48:47.524] I0827 14:48:47.450335   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-resources", UID:"7467735f-2b3e-41d8-9699-4d2797d80c2b", APIVersion:"apps/v1", ResourceVersion:"1847", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-8586dd678 to 1
W0827 14:48:47.525] I0827 14:48:47.457412   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917315-6333", Name:"nginx-deployment-resources-8586dd678", UID:"bd67393d-1353-4e91-8415-9ea37d46fc23", APIVersion:"apps/v1", ResourceVersion:"1852", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-8586dd678-g95vz
W0827 14:48:47.525] E0827 14:48:47.500090   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:47.625] E0827 14:48:47.624265   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:47.720] E0827 14:48:47.719336   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:47.820] core.sh:1280: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0827 14:48:47.821] core.sh:1281: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0827 14:48:47.821] core.sh:1282: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
I0827 14:48:47.831] apiVersion: apps/v1
I0827 14:48:47.832] kind: Deployment
I0827 14:48:47.832] metadata:
... skipping 169 lines ...
I0827 14:48:47.864]     status: "True"
I0827 14:48:47.864]     type: Progressing
I0827 14:48:47.864]   observedGeneration: 4
I0827 14:48:47.864]   replicas: 4
I0827 14:48:47.864]   unavailableReplicas: 4
I0827 14:48:47.864]   updatedReplicas: 1
W0827 14:48:47.965] error: you must specify resources by --filename when --local is set.
W0827 14:48:47.965] Example resource specifications include:
W0827 14:48:47.965]    '-f rsrc.yaml'
W0827 14:48:47.965]    '--filename=rsrc.json'
I0827 14:48:48.066] core.sh:1286: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0827 14:48:48.103] core.sh:1287: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0827 14:48:48.193] core.sh:1288: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 23 lines ...
I0827 14:48:49.221] Successful
I0827 14:48:49.221] message:10
I0827 14:48:49.221] has:10
I0827 14:48:49.306] Successful
I0827 14:48:49.306] message:apps/v1
I0827 14:48:49.306] has:apps/v1
W0827 14:48:49.407] E0827 14:48:48.412420   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:49.407] E0827 14:48:48.501662   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:49.407] E0827 14:48:48.625415   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:49.408] I0827 14:48:48.631219   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"test-nginx-extensions", UID:"a3fc4fce-991d-46a7-a275-092df79bf280", APIVersion:"apps/v1", ResourceVersion:"1880", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-extensions-574b6dd4f9 to 1
W0827 14:48:49.408] I0827 14:48:48.636912   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"test-nginx-extensions-574b6dd4f9", UID:"76ebf205-183a-4aaa-9cc8-3a5817f56439", APIVersion:"apps/v1", ResourceVersion:"1881", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-extensions-574b6dd4f9-4pxc5
W0827 14:48:49.408] E0827 14:48:48.720914   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:49.409] I0827 14:48:49.051055   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"test-nginx-apps", UID:"02c70951-e705-4fa8-bb81-e66045c87734", APIVersion:"apps/v1", ResourceVersion:"1895", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-apps-7fb7df9785 to 1
W0827 14:48:49.409] I0827 14:48:49.055119   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"test-nginx-apps-7fb7df9785", UID:"29158773-7584-45c4-9007-3dedeaa95727", APIVersion:"apps/v1", ResourceVersion:"1896", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-apps-7fb7df9785-dql76
W0827 14:48:49.414] E0827 14:48:49.414112   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:49.503] E0827 14:48:49.503254   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:49.604] Successful describe rs:
I0827 14:48:49.604] Name:           test-nginx-apps-7fb7df9785
I0827 14:48:49.605] Namespace:      namespace-1566917328-21661
I0827 14:48:49.605] Selector:       app=test-nginx-apps,pod-template-hash=7fb7df9785
I0827 14:48:49.605] Labels:         app=test-nginx-apps
I0827 14:48:49.605]                 pod-template-hash=7fb7df9785
I0827 14:48:49.605] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0827 14:48:49.605]                 deployment.kubernetes.io/max-replicas: 2
I0827 14:48:49.605]                 deployment.kubernetes.io/revision: 1
I0827 14:48:49.605] Controlled By:  Deployment/test-nginx-apps
I0827 14:48:49.605] Replicas:       1 current / 1 desired
I0827 14:48:49.605] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0827 14:48:49.606] Pod Template:
I0827 14:48:49.606]   Labels:  app=test-nginx-apps
I0827 14:48:49.606]            pod-template-hash=7fb7df9785
I0827 14:48:49.606]   Containers:
I0827 14:48:49.606]    nginx:
I0827 14:48:49.606]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 38 lines ...
I0827 14:48:50.105] apps.sh:224: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:50.260] deployment.apps/deployment-with-unixuserid created
I0827 14:48:50.362] apps.sh:228: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: deployment-with-unixuserid:
I0827 14:48:50.444] deployment.apps "deployment-with-unixuserid" deleted
I0827 14:48:50.544] apps.sh:235: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:50.703] deployment.apps/nginx-deployment created
W0827 14:48:50.804] E0827 14:48:49.626751   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:50.804] E0827 14:48:49.722357   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:50.805] I0827 14:48:49.838002   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"nginx-with-command", UID:"b1b329dc-6687-4672-8659-bda98cbb981c", APIVersion:"apps/v1", ResourceVersion:"1910", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-with-command-78b77c48d8 to 1
W0827 14:48:50.805] I0827 14:48:49.841825   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-with-command-78b77c48d8", UID:"a6e69aed-a1de-4f09-9e31-1ef192cdf9d0", APIVersion:"apps/v1", ResourceVersion:"1911", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-with-command-78b77c48d8-z4m79
W0827 14:48:50.806] I0827 14:48:50.264531   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"deployment-with-unixuserid", UID:"78b07e10-5d3c-4f7f-93fc-66250d83743d", APIVersion:"apps/v1", ResourceVersion:"1924", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set deployment-with-unixuserid-6f4d54669 to 1
W0827 14:48:50.806] I0827 14:48:50.268496   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"deployment-with-unixuserid-6f4d54669", UID:"0fd0a9bb-f497-43b7-a3d1-3e5712073934", APIVersion:"apps/v1", ResourceVersion:"1925", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-with-unixuserid-6f4d54669-9x9ln
W0827 14:48:50.807] E0827 14:48:50.415458   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:50.807] E0827 14:48:50.505682   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:50.807] E0827 14:48:50.628387   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:50.808] I0827 14:48:50.709449   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment", UID:"459ceb38-cfab-46c1-b419-ad99011a4bb3", APIVersion:"apps/v1", ResourceVersion:"1938", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-66987bfc58 to 3
W0827 14:48:50.808] I0827 14:48:50.712775   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment-66987bfc58", UID:"2fba59eb-f90b-4472-ae3b-4d1fa98b75bd", APIVersion:"apps/v1", ResourceVersion:"1939", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-66987bfc58-zs94j
W0827 14:48:50.808] I0827 14:48:50.716656   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment-66987bfc58", UID:"2fba59eb-f90b-4472-ae3b-4d1fa98b75bd", APIVersion:"apps/v1", ResourceVersion:"1939", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-66987bfc58-n57c2
W0827 14:48:50.809] I0827 14:48:50.717265   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment-66987bfc58", UID:"2fba59eb-f90b-4472-ae3b-4d1fa98b75bd", APIVersion:"apps/v1", ResourceVersion:"1939", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-66987bfc58-99k8g
W0827 14:48:50.809] E0827 14:48:50.723770   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:50.910] apps.sh:239: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 3
I0827 14:48:50.910] deployment.apps "nginx-deployment" deleted
I0827 14:48:51.015] apps.sh:242: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:51.110] apps.sh:246: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:51.205] apps.sh:247: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:51.294] deployment.apps/nginx-deployment created
W0827 14:48:51.396] I0827 14:48:51.298508   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment", UID:"bd79922e-2a86-40e6-aa0a-2facafe9564e", APIVersion:"apps/v1", ResourceVersion:"1961", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6fd788478f to 1
W0827 14:48:51.396] I0827 14:48:51.302700   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment-6fd788478f", UID:"110e2b5c-8010-4a2d-b113-f5f40714fd7f", APIVersion:"apps/v1", ResourceVersion:"1962", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6fd788478f-ntrgp
W0827 14:48:51.417] E0827 14:48:51.416779   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:51.507] E0827 14:48:51.507163   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:51.609] apps.sh:251: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1
I0827 14:48:51.609] deployment.apps "nginx-deployment" deleted
I0827 14:48:51.619] apps.sh:256: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:51.727] apps.sh:257: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1
I0827 14:48:51.901] replicaset.apps "nginx-deployment-6fd788478f" deleted
I0827 14:48:52.002] apps.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:52.160] deployment.apps/nginx-deployment created
I0827 14:48:52.264] apps.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
I0827 14:48:52.356] horizontalpodautoscaler.autoscaling/nginx-deployment autoscaled
W0827 14:48:52.457] E0827 14:48:51.630525   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:52.458] E0827 14:48:51.725138   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:52.458] I0827 14:48:52.165262   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment", UID:"dc5903e6-99eb-46f4-b43f-dac7d1a82bc4", APIVersion:"apps/v1", ResourceVersion:"1979", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-66987bfc58 to 3
W0827 14:48:52.458] I0827 14:48:52.170601   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment-66987bfc58", UID:"de2a95aa-dde2-4b58-8089-10cde2d4e794", APIVersion:"apps/v1", ResourceVersion:"1980", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-66987bfc58-7r97w
W0827 14:48:52.459] I0827 14:48:52.176051   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment-66987bfc58", UID:"de2a95aa-dde2-4b58-8089-10cde2d4e794", APIVersion:"apps/v1", ResourceVersion:"1980", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-66987bfc58-9d5k6
W0827 14:48:52.459] I0827 14:48:52.176854   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment-66987bfc58", UID:"de2a95aa-dde2-4b58-8089-10cde2d4e794", APIVersion:"apps/v1", ResourceVersion:"1980", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-66987bfc58-hjqtq
W0827 14:48:52.459] E0827 14:48:52.418141   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:52.509] E0827 14:48:52.508565   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:52.609] apps.sh:271: Successful get hpa nginx-deployment {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0827 14:48:52.610] horizontalpodautoscaler.autoscaling "nginx-deployment" deleted
I0827 14:48:52.637] deployment.apps "nginx-deployment" deleted
W0827 14:48:52.737] E0827 14:48:52.632082   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:52.738] E0827 14:48:52.726653   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:52.838] apps.sh:279: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:48:52.911] deployment.apps/nginx created
W0827 14:48:53.012] I0827 14:48:52.915610   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"nginx", UID:"525003e7-0e3c-430a-ba8c-5361dd03571e", APIVersion:"apps/v1", ResourceVersion:"2003", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-bbbbb95b5 to 3
W0827 14:48:53.013] I0827 14:48:52.920008   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-bbbbb95b5", UID:"1bcbfbae-c9b0-4241-8822-7832b825df27", APIVersion:"apps/v1", ResourceVersion:"2004", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-t89cb
W0827 14:48:53.013] I0827 14:48:52.926075   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-bbbbb95b5", UID:"1bcbfbae-c9b0-4241-8822-7832b825df27", APIVersion:"apps/v1", ResourceVersion:"2004", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-tjqct
W0827 14:48:53.014] I0827 14:48:52.927458   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-bbbbb95b5", UID:"1bcbfbae-c9b0-4241-8822-7832b825df27", APIVersion:"apps/v1", ResourceVersion:"2004", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-fvzz7
I0827 14:48:53.114] apps.sh:283: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0827 14:48:53.126] apps.sh:284: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0827 14:48:53.230] deployment.apps/nginx skipped rollback (current template already matches revision 1)
I0827 14:48:53.334] apps.sh:287: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0827 14:48:53.504] deployment.apps/nginx configured
W0827 14:48:53.605] E0827 14:48:53.420877   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:53.605] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W0827 14:48:53.606] I0827 14:48:53.510283   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"nginx", UID:"525003e7-0e3c-430a-ba8c-5361dd03571e", APIVersion:"apps/v1", ResourceVersion:"2018", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-56b84d547f to 1
W0827 14:48:53.606] E0827 14:48:53.510953   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:53.607] I0827 14:48:53.515380   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-56b84d547f", UID:"60456469-a794-4025-953d-e83a887be927", APIVersion:"apps/v1", ResourceVersion:"2019", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-56b84d547f-x9jf8
W0827 14:48:53.634] E0827 14:48:53.633707   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:53.729] E0827 14:48:53.728508   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:53.829] apps.sh:290: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0827 14:48:53.830]     Image:	k8s.gcr.io/nginx:test-cmd
I0827 14:48:53.851] apps.sh:293: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0827 14:48:53.971] deployment.apps/nginx rolled back
W0827 14:48:54.423] E0827 14:48:54.422417   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:54.513] E0827 14:48:54.512364   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:54.635] E0827 14:48:54.635189   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:54.730] E0827 14:48:54.730123   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:55.072] apps.sh:297: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0827 14:48:55.273] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0827 14:48:55.376] deployment.apps/nginx rolled back
W0827 14:48:55.477] error: unable to find specified revision 1000000 in history
W0827 14:48:55.477] E0827 14:48:55.424179   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:55.514] E0827 14:48:55.514052   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:55.637] E0827 14:48:55.636559   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:55.732] E0827 14:48:55.731698   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:56.426] E0827 14:48:56.425523   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:56.516] E0827 14:48:56.515447   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:56.616] apps.sh:304: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0827 14:48:56.617] deployment.apps/nginx paused
W0827 14:48:56.717] E0827 14:48:56.637922   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:56.718] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
W0827 14:48:56.733] E0827 14:48:56.733041   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:56.759] error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
I0827 14:48:56.859] deployment.apps/nginx resumed
I0827 14:48:56.959] deployment.apps/nginx rolled back
I0827 14:48:57.145]     deployment.kubernetes.io/revision-history: 1,3
W0827 14:48:57.344] error: desired revision (3) is different from the running revision (5)
W0827 14:48:57.427] E0827 14:48:57.426698   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:57.450] I0827 14:48:57.449647   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"nginx", UID:"525003e7-0e3c-430a-ba8c-5361dd03571e", APIVersion:"apps/v1", ResourceVersion:"2048", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-bbbbb95b5 to 2
W0827 14:48:57.456] I0827 14:48:57.455385   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-bbbbb95b5", UID:"1bcbfbae-c9b0-4241-8822-7832b825df27", APIVersion:"apps/v1", ResourceVersion:"2052", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-bbbbb95b5-tjqct
W0827 14:48:57.464] I0827 14:48:57.463574   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"nginx", UID:"525003e7-0e3c-430a-ba8c-5361dd03571e", APIVersion:"apps/v1", ResourceVersion:"2051", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-698b5b5b74 to 1
W0827 14:48:57.469] I0827 14:48:57.468297   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-698b5b5b74", UID:"463489b2-f00d-421a-97ee-b192c70db9cf", APIVersion:"apps/v1", ResourceVersion:"2059", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-698b5b5b74-bqjns
W0827 14:48:57.517] E0827 14:48:57.516838   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:57.618] deployment.apps/nginx restarted
W0827 14:48:57.719] E0827 14:48:57.639340   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:57.735] E0827 14:48:57.734444   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:58.428] E0827 14:48:58.428147   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:58.518] E0827 14:48:58.517930   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:48:58.629] Successful
I0827 14:48:58.629] message:apiVersion: apps/v1
I0827 14:48:58.629] kind: ReplicaSet
I0827 14:48:58.629] metadata:
I0827 14:48:58.629]   annotations:
I0827 14:48:58.630]     deployment.kubernetes.io/desired-replicas: "3"
... skipping 115 lines ...
I0827 14:48:58.647]       terminationGracePeriodSeconds: 30
I0827 14:48:58.647] status:
I0827 14:48:58.647]   fullyLabeledReplicas: 1
I0827 14:48:58.647]   observedGeneration: 2
I0827 14:48:58.647]   replicas: 1
I0827 14:48:58.647] has:deployment.kubernetes.io/revision: "6"
W0827 14:48:58.748] E0827 14:48:58.641687   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:58.749] E0827 14:48:58.735581   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:48:58.788] I0827 14:48:58.788182   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"nginx2", UID:"f9883263-d14e-43a8-9c18-1c806a8a0bcb", APIVersion:"apps/v1", ResourceVersion:"2068", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-659456565 to 3
W0827 14:48:58.793] I0827 14:48:58.792700   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx2-659456565", UID:"e5b5cc18-b72a-4347-b9c8-8ae23b92aef0", APIVersion:"apps/v1", ResourceVersion:"2069", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-659456565-b646f
W0827 14:48:58.797] I0827 14:48:58.796549   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx2-659456565", UID:"e5b5cc18-b72a-4347-b9c8-8ae23b92aef0", APIVersion:"apps/v1", ResourceVersion:"2069", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-659456565-vfmzr
W0827 14:48:58.799] I0827 14:48:58.798464   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx2-659456565", UID:"e5b5cc18-b72a-4347-b9c8-8ae23b92aef0", APIVersion:"apps/v1", ResourceVersion:"2069", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-659456565-cdrfs
I0827 14:48:58.899] deployment.apps/nginx2 created
I0827 14:48:58.900] deployment.apps "nginx2" deleted
... skipping 23 lines ...
I0827 14:49:00.985] apps.sh:361: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0827 14:49:01.160] apps.sh:364: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0827 14:49:01.250] apps.sh:365: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0827 14:49:01.333] deployment.apps "nginx-deployment" deleted
I0827 14:49:01.431] apps.sh:371: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:49:01.577] deployment.apps/nginx-deployment created
W0827 14:49:01.677] E0827 14:48:59.429446   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:01.678] E0827 14:48:59.519319   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:01.678] I0827 14:48:59.599108   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment", UID:"3e5b13e5-1e08-422e-ac1b-7678472ea2d3", APIVersion:"apps/v1", ResourceVersion:"2118", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6ccccffbb4 to 1
W0827 14:49:01.678] I0827 14:48:59.603536   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment-6ccccffbb4", UID:"d2415fc4-5985-4ff4-a1c0-415b94b8407c", APIVersion:"apps/v1", ResourceVersion:"2119", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6ccccffbb4-thw7d
W0827 14:49:01.679] E0827 14:48:59.642904   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:01.679] E0827 14:48:59.737126   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:01.679] error: unable to find container named "redis"
W0827 14:49:01.679] I0827 14:49:00.232897   52919 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1566917315-6333
W0827 14:49:01.680] E0827 14:49:00.430875   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:01.680] E0827 14:49:00.520662   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:01.680] E0827 14:49:00.644682   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:01.680] E0827 14:49:00.738257   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:01.681] I0827 14:49:00.812221   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment", UID:"3e5b13e5-1e08-422e-ac1b-7678472ea2d3", APIVersion:"apps/v1", ResourceVersion:"2136", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6ccccffbb4 to 0
W0827 14:49:01.681] I0827 14:49:00.818258   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment-6ccccffbb4", UID:"d2415fc4-5985-4ff4-a1c0-415b94b8407c", APIVersion:"apps/v1", ResourceVersion:"2140", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6ccccffbb4-thw7d
W0827 14:49:01.681] I0827 14:49:00.824010   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment", UID:"3e5b13e5-1e08-422e-ac1b-7678472ea2d3", APIVersion:"apps/v1", ResourceVersion:"2138", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6596c84b98 to 1
W0827 14:49:01.682] I0827 14:49:00.833452   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment-6596c84b98", UID:"84f85340-ca69-4bd0-bda1-d9e3808f35b8", APIVersion:"apps/v1", ResourceVersion:"2146", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6596c84b98-cpxc6
W0827 14:49:01.682] E0827 14:49:01.432771   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:01.682] E0827 14:49:01.522487   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:01.682] I0827 14:49:01.581570   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment", UID:"3127f157-fd65-4f30-9d2c-6cb6099b5942", APIVersion:"apps/v1", ResourceVersion:"2169", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-75547f8f9b to 3
W0827 14:49:01.683] I0827 14:49:01.587135   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment-75547f8f9b", UID:"d6a0538f-8ee3-4f26-9857-82990e4da2eb", APIVersion:"apps/v1", ResourceVersion:"2170", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-75547f8f9b-zhgwf
W0827 14:49:01.683] I0827 14:49:01.592066   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment-75547f8f9b", UID:"d6a0538f-8ee3-4f26-9857-82990e4da2eb", APIVersion:"apps/v1", ResourceVersion:"2170", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-75547f8f9b-5fvwg
W0827 14:49:01.683] I0827 14:49:01.593829   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment-75547f8f9b", UID:"d6a0538f-8ee3-4f26-9857-82990e4da2eb", APIVersion:"apps/v1", ResourceVersion:"2170", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-75547f8f9b-8lvww
W0827 14:49:01.684] E0827 14:49:01.646348   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:01.739] E0827 14:49:01.739284   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:49:01.840] configmap/test-set-env-config created
I0827 14:49:01.901] secret/test-set-env-secret created
I0827 14:49:02.003] apps.sh:376: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
I0827 14:49:02.098] apps.sh:378: Successful get configmaps/test-set-env-config {{.metadata.name}}: test-set-env-config
I0827 14:49:02.185] apps.sh:379: Successful get secret {{range.items}}{{.metadata.name}}:{{end}}: test-set-env-secret:
I0827 14:49:02.285] deployment.apps/nginx-deployment env updated
W0827 14:49:02.386] I0827 14:49:02.290655   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment", UID:"3127f157-fd65-4f30-9d2c-6cb6099b5942", APIVersion:"apps/v1", ResourceVersion:"2185", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-586b69f9d8 to 1
W0827 14:49:02.386] I0827 14:49:02.296169   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment-586b69f9d8", UID:"cd312973-15a6-4186-a932-7558e7f41844", APIVersion:"apps/v1", ResourceVersion:"2186", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-586b69f9d8-gxk27
W0827 14:49:02.436] E0827 14:49:02.435667   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:02.524] E0827 14:49:02.524294   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:02.618] I0827 14:49:02.617493   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment", UID:"3127f157-fd65-4f30-9d2c-6cb6099b5942", APIVersion:"apps/v1", ResourceVersion:"2194", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-75547f8f9b to 2
W0827 14:49:02.623] I0827 14:49:02.622592   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment-75547f8f9b", UID:"d6a0538f-8ee3-4f26-9857-82990e4da2eb", APIVersion:"apps/v1", ResourceVersion:"2198", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-75547f8f9b-zhgwf
W0827 14:49:02.633] I0827 14:49:02.633081   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment", UID:"3127f157-fd65-4f30-9d2c-6cb6099b5942", APIVersion:"apps/v1", ResourceVersion:"2197", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-f48fcfc54 to 1
W0827 14:49:02.638] I0827 14:49:02.638047   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment-f48fcfc54", UID:"a045a2e1-5ec6-4a89-a67a-61692a981223", APIVersion:"apps/v1", ResourceVersion:"2201", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-f48fcfc54-cxh7t
W0827 14:49:02.648] E0827 14:49:02.647324   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:02.741] E0827 14:49:02.740715   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:49:02.842] apps.sh:383: Successful get deploy nginx-deployment {{ (index (index .spec.template.spec.containers 0).env 0).name}}: KEY_2
I0827 14:49:02.842] apps.sh:385: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 1
I0827 14:49:02.842] deployment.apps/nginx-deployment env updated
I0827 14:49:02.843] apps.sh:389: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 2
I0827 14:49:02.843] deployment.apps/nginx-deployment env updated
W0827 14:49:02.943] I0827 14:49:02.844391   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment", UID:"3127f157-fd65-4f30-9d2c-6cb6099b5942", APIVersion:"apps/v1", ResourceVersion:"2214", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-75547f8f9b to 1
... skipping 11 lines ...
W0827 14:49:03.310] I0827 14:49:03.309950   52919 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment", UID:"3127f157-fd65-4f30-9d2c-6cb6099b5942", APIVersion:"apps/v1", ResourceVersion:"2257", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5c45f7c99 to 1
W0827 14:49:03.330] I0827 14:49:03.323897   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917328-21661", Name:"nginx-deployment-5c45f7c99", UID:"dbae7752-d55a-44c3-9ecc-2ff4911c8ac0", APIVersion:"apps/v1", ResourceVersion:"2267", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5c45f7c99-mrblh
I0827 14:49:03.430] deployment.apps/nginx-deployment env updated
I0827 14:49:03.431] deployment.apps/nginx-deployment env updated
I0827 14:49:03.431] deployment.apps "nginx-deployment" deleted
I0827 14:49:03.523] configmap "test-set-env-config" deleted
W0827 14:49:03.629] E0827 14:49:03.437247   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:03.629] E0827 14:49:03.525266   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:03.649] E0827 14:49:03.649173   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:03.743] E0827 14:49:03.742955   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:49:03.844] secret "test-set-env-secret" deleted
I0827 14:49:03.844] +++ exit code: 0
I0827 14:49:03.844] Recording: run_rs_tests
I0827 14:49:03.844] Running command: run_rs_tests
I0827 14:49:03.844] 
I0827 14:49:03.844] +++ Running case: test-cmd.run_rs_tests 
... skipping 13 lines ...
I0827 14:49:04.720] apps.sh:527: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0827 14:49:04.722] +++ [0827 14:49:04] Deleting rs
I0827 14:49:04.807] replicaset.apps "frontend-no-cascade" deleted
W0827 14:49:04.908] I0827 14:49:04.160242   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917343-27164", Name:"frontend", UID:"03c767b4-61fc-4e26-9c5c-3f1a949571cf", APIVersion:"apps/v1", ResourceVersion:"2296", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8r67h
W0827 14:49:04.908] I0827 14:49:04.163875   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917343-27164", Name:"frontend", UID:"03c767b4-61fc-4e26-9c5c-3f1a949571cf", APIVersion:"apps/v1", ResourceVersion:"2296", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-cz4w5
W0827 14:49:04.909] I0827 14:49:04.163986   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917343-27164", Name:"frontend", UID:"03c767b4-61fc-4e26-9c5c-3f1a949571cf", APIVersion:"apps/v1", ResourceVersion:"2296", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hklgg
W0827 14:49:04.909] E0827 14:49:04.438823   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:04.909] E0827 14:49:04.526639   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:04.909] I0827 14:49:04.622163   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917343-27164", Name:"frontend-no-cascade", UID:"663ca811-019a-4e96-b58f-bea379b4dddb", APIVersion:"apps/v1", ResourceVersion:"2312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-knzls
W0827 14:49:04.910] I0827 14:49:04.625906   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917343-27164", Name:"frontend-no-cascade", UID:"663ca811-019a-4e96-b58f-bea379b4dddb", APIVersion:"apps/v1", ResourceVersion:"2312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-6c6f4
W0827 14:49:04.910] I0827 14:49:04.629144   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917343-27164", Name:"frontend-no-cascade", UID:"663ca811-019a-4e96-b58f-bea379b4dddb", APIVersion:"apps/v1", ResourceVersion:"2312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-bth44
W0827 14:49:04.910] E0827 14:49:04.650456   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:04.910] E0827 14:49:04.744219   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:49:05.011] apps.sh:531: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:49:05.087] apps.sh:533: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0827 14:49:05.174] (Bpod "frontend-no-cascade-6c6f4" deleted
I0827 14:49:05.179] pod "frontend-no-cascade-bth44" deleted
I0827 14:49:05.185] pod "frontend-no-cascade-knzls" deleted
I0827 14:49:05.278] apps.sh:536: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:49:05.364] apps.sh:540: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0827 14:49:05.513] replicaset.apps/frontend created
W0827 14:49:05.614] E0827 14:49:05.440557   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:05.615] I0827 14:49:05.518055   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917343-27164", Name:"frontend", UID:"81acae74-d0ca-4b04-b3f0-95a8c336893c", APIVersion:"apps/v1", ResourceVersion:"2337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jfsrt
W0827 14:49:05.616] I0827 14:49:05.521866   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917343-27164", Name:"frontend", UID:"81acae74-d0ca-4b04-b3f0-95a8c336893c", APIVersion:"apps/v1", ResourceVersion:"2337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-cj4nn
W0827 14:49:05.616] I0827 14:49:05.522190   52919 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1566917343-27164", Name:"frontend", UID:"81acae74-d0ca-4b04-b3f0-95a8c336893c", APIVersion:"apps/v1", ResourceVersion:"2337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-w7bz6
W0827 14:49:05.617] E0827 14:49:05.527904   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:05.652] E0827 14:49:05.652136   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:05.746] E0827 14:49:05.745570   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0827 14:49:05.847] apps.sh:544: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I0827 14:49:05.847] apps.sh:546: Successful describe rs frontend:
I0827 14:49:05.848] Name:         frontend
I0827 14:49:05.848] Namespace:    namespace-1566917343-27164
I0827 14:49:05.849] Selector:     app=guestbook,tier=frontend
I0827 14:49:05.849] Labels:       app=guestbook
I0827 14:49:05.849]               tier=frontend
I0827 14:49:05.849] Annotations:  <none>
I0827 14:49:05.850] Replicas:     3 current / 3 desired
I0827 14:49:05.850] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0827 14:49:05.850] Pod Template:
I0827 14:49:05.851]   Labels:  app=guestbook
I0827 14:49:05.851]            tier=frontend
I0827 14:49:05.851]   Containers:
I0827 14:49:05.852]    php-redis:
I0827 14:49:05.852]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0827 14:49:05.878] Namespace:    namespace-1566917343-27164
I0827 14:49:05.879] Selector:     app=guestbook,tier=frontend
I0827 14:49:05.879] Labels:       app=guestbook
I0827 14:49:05.879]               tier=frontend
I0827 14:49:05.879] Annotations:  <none>
I0827 14:49:05.880] Replicas:     3 current / 3 desired
I0827 14:49:05.880] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0827 14:49:05.880] Pod Template:
I0827 14:49:05.880]   Labels:  app=guestbook
I0827 14:49:05.881]            tier=frontend
I0827 14:49:05.881]   Containers:
I0827 14:49:05.881]    php-redis:
I0827 14:49:05.881]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0827 14:49:05.989] Namespace:    namespace-1566917343-27164
I0827 14:49:05.990] Selector:     app=guestbook,tier=frontend
I0827 14:49:05.990] Labels:       app=guestbook
I0827 14:49:05.990]               tier=frontend
I0827 14:49:05.991] Annotations:  <none>
I0827 14:49:05.991] Replicas:     3 current / 3 desired
I0827 14:49:05.991] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0827 14:49:05.992] Pod Template:
I0827 14:49:05.992]   Labels:  app=guestbook
I0827 14:49:05.992]            tier=frontend
I0827 14:49:05.992]   Containers:
I0827 14:49:05.993]    php-redis:
I0827 14:49:05.993]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0827 14:49:06.102] Namespace:    namespace-1566917343-27164
I0827 14:49:06.102] Selector:     app=guestbook,tier=frontend
I0827 14:49:06.102] Labels:       app=guestbook
I0827 14:49:06.102]               tier=frontend
I0827 14:49:06.102] Annotations:  <none>
I0827 14:49:06.102] Replicas:     3 current / 3 desired
I0827 14:49:06.102] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0827 14:49:06.102] Pod Template:
I0827 14:49:06.103]   Labels:  app=guestbook
I0827 14:49:06.103]            tier=frontend
I0827 14:49:06.103]   Containers:
I0827 14:49:06.103]    php-redis:
I0827 14:49:06.103]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0827 14:49:06.236] Namespace:    namespace-1566917343-27164
I0827 14:49:06.237] Selector:     app=guestbook,tier=frontend
I0827 14:49:06.237] Labels:       app=guestbook
I0827 14:49:06.237]               tier=frontend
I0827 14:49:06.237] Annotations:  <none>
I0827 14:49:06.238] Replicas:     3 current / 3 desired
I0827 14:49:06.238] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0827 14:49:06.238] Pod Template:
I0827 14:49:06.238]   Labels:  app=guestbook
I0827 14:49:06.238]            tier=frontend
I0827 14:49:06.238]   Containers:
I0827 14:49:06.239]    php-redis:
I0827 14:49:06.239]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0827 14:49:06.337] Namespace:    namespace-1566917343-27164
I0827 14:49:06.337] Selector:     app=guestbook,tier=frontend
I0827 14:49:06.337] Labels:       app=guestbook
I0827 14:49:06.337]               tier=frontend
I0827 14:49:06.337] Annotations:  <none>
I0827 14:49:06.337] Replicas:     3 current / 3 desired
I0827 14:49:06.338] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0827 14:49:06.338] Pod Template:
I0827 14:49:06.338]   Labels:  app=guestbook
I0827 14:49:06.338]            tier=frontend
I0827 14:49:06.338]   Containers:
I0827 14:49:06.338]    php-redis:
I0827 14:49:06.338]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0827 14:49:06.437] Namespace:    namespace-1566917343-27164
I0827 14:49:06.438] Selector:     app=guestbook,tier=frontend
I0827 14:49:06.438] Labels:       app=guestbook
I0827 14:49:06.438]               tier=frontend
I0827 14:49:06.438] Annotations:  <none>
I0827 14:49:06.438] Replicas:     3 current / 3 desired
I0827 14:49:06.438] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0827 14:49:06.438] Pod Template:
I0827 14:49:06.438]   Labels:  app=guestbook
I0827 14:49:06.439]            tier=frontend
I0827 14:49:06.439]   Containers:
I0827 14:49:06.439]    php-redis:
I0827 14:49:06.439]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0827 14:49:06.549] Namespace:    namespace-1566917343-27164
I0827 14:49:06.549] Selector:     app=guestbook,tier=frontend
I0827 14:49:06.549] Labels:       app=guestbook
I0827 14:49:06.550]               tier=frontend
I0827 14:49:06.550] Annotations:  <none>
I0827 14:49:06.550] Replicas:     3 current / 3 desired
I0827 14:49:06.550] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0827 14:49:06.551] Pod Template:
I0827 14:49:06.551]   Labels:  app=guestbook
I0827 14:49:06.551]            tier=frontend
I0827 14:49:06.551]   Containers:
I0827 14:49:06.552]    php-redis:
I0827 14:49:06.552]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 99 lines ...
I0827 14:49:06.692] Tolerations:           <none>
I0827 14:49:06.692] Events:                <none>
I0827 14:49:06.761] apps.sh:566: Successful get rs frontend {{.spec.replicas}}: 3
I0827 14:49:06.845] replicaset.apps/frontend scaled
I0827 14:49:06.936] apps.sh:570: Successful get rs frontend {{.spec.replicas}}: 2
I0827 14:49:07.083] deployment.apps/scale-1 created
W0827 14:49:07.184] E0827 14:49:06.441968   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:07.185] E0827 14:49:06.529088   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0827 14:49:07.186] E0827 14:49:06.653420   52919 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to li