PR: draveness: feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 1 failed / 2898 succeeded
Started: 2019-10-17 16:11
Elapsed: 29m39s
Revision:
Builder: gke-prow-ssd-pool-1a225945-nk4t
Refs: master:fef81925, 82703:af8cb418
pod: a837a747-f0f8-11e9-989f-ca7475754d8d
infra-commit: ac4b4b51f
repo: k8s.io/kubernetes
repo-commit: 2ca0f5075a1c65bd6593838cfe9565867a14d3ac
repos: {u'k8s.io/kubernetes': u'master:fef819254a061e37c83edf894f43e33479ce1923,82703:af8cb4184376ee5b1f49c4636a86b417d80788e8'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestSchedulerCreationFromConfigMap 4.25s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestSchedulerCreationFromConfigMap$
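
The log below shows the test's in-process apiserver pointing its storage backend at an etcd endpoint on http://127.0.0.1:2379, so running the command above locally needs an etcd listening there. A minimal sketch, assuming a k8s.io/kubernetes checkout and an etcd binary on PATH:

etcd --listen-client-urls http://127.0.0.1:2379 --advertise-client-urls http://127.0.0.1:2379 &
go test -v k8s.io/kubernetes/test/integration/scheduler -run TestSchedulerCreationFromConfigMap$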
=== RUN   TestSchedulerCreationFromConfigMap
W1017 16:37:56.216291  108271 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I1017 16:37:56.216312  108271 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I1017 16:37:56.216326  108271 master.go:305] Node port range unspecified. Defaulting to 30000-32767.
I1017 16:37:56.216336  108271 master.go:261] Using reconciler: 
I1017 16:37:56.218218  108271 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.218422  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.218512  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.221715  108271 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I1017 16:37:56.221796  108271 reflector.go:185] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I1017 16:37:56.221817  108271 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.222211  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.222249  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.222978  108271 store.go:1342] Monitoring events count at <storage-prefix>//events
I1017 16:37:56.223021  108271 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1017 16:37:56.223033  108271 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.223309  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.223333  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.223966  108271 watch_cache.go:451] Replace watchCache (rev: 44147) 
I1017 16:37:56.224789  108271 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I1017 16:37:56.224937  108271 watch_cache.go:451] Replace watchCache (rev: 44147) 
I1017 16:37:56.224906  108271 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.225059  108271 reflector.go:185] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I1017 16:37:56.225176  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.225206  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.232282  108271 watch_cache.go:451] Replace watchCache (rev: 44147) 
I1017 16:37:56.233125  108271 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I1017 16:37:56.233363  108271 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.233584  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.233612  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.233715  108271 reflector.go:185] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I1017 16:37:56.235963  108271 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I1017 16:37:56.236209  108271 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.236253  108271 reflector.go:185] Listing and watching *core.Secret from storage/cacher.go:/secrets
I1017 16:37:56.236372  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.236395  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.235967  108271 watch_cache.go:451] Replace watchCache (rev: 44147) 
I1017 16:37:56.237148  108271 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I1017 16:37:56.237292  108271 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.237398  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.237412  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.237473  108271 reflector.go:185] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I1017 16:37:56.238051  108271 watch_cache.go:451] Replace watchCache (rev: 44147) 
I1017 16:37:56.239354  108271 watch_cache.go:451] Replace watchCache (rev: 44148) 
I1017 16:37:56.239762  108271 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I1017 16:37:56.239870  108271 reflector.go:185] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I1017 16:37:56.239950  108271 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.240654  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.240683  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.242775  108271 watch_cache.go:451] Replace watchCache (rev: 44148) 
I1017 16:37:56.242780  108271 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I1017 16:37:56.242846  108271 reflector.go:185] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I1017 16:37:56.242966  108271 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.243124  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.243148  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.244059  108271 watch_cache.go:451] Replace watchCache (rev: 44148) 
I1017 16:37:56.244415  108271 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I1017 16:37:56.244467  108271 reflector.go:185] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I1017 16:37:56.244625  108271 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.244793  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.244822  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.245702  108271 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I1017 16:37:56.245746  108271 reflector.go:185] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I1017 16:37:56.245778  108271 watch_cache.go:451] Replace watchCache (rev: 44148) 
I1017 16:37:56.245883  108271 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.246036  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.246059  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.246856  108271 watch_cache.go:451] Replace watchCache (rev: 44148) 
I1017 16:37:56.247086  108271 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I1017 16:37:56.247204  108271 reflector.go:185] Listing and watching *core.Node from storage/cacher.go:/minions
I1017 16:37:56.247258  108271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.247394  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.247411  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.248407  108271 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I1017 16:37:56.248684  108271 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.248779  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.248794  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.248851  108271 reflector.go:185] Listing and watching *core.Pod from storage/cacher.go:/pods
I1017 16:37:56.249044  108271 watch_cache.go:451] Replace watchCache (rev: 44148) 
I1017 16:37:56.249867  108271 watch_cache.go:451] Replace watchCache (rev: 44148) 
I1017 16:37:56.250129  108271 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I1017 16:37:56.250173  108271 reflector.go:185] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I1017 16:37:56.250311  108271 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.250436  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.250456  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.251146  108271 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I1017 16:37:56.251205  108271 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.251220  108271 reflector.go:185] Listing and watching *core.Service from storage/cacher.go:/services/specs
I1017 16:37:56.251254  108271 watch_cache.go:451] Replace watchCache (rev: 44148) 
I1017 16:37:56.251332  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.251348  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.254264  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.254291  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.255766  108271 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.255935  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.255963  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.256880  108271 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I1017 16:37:56.256906  108271 rest.go:115] the default service ipfamily for this cluster is: IPv4
I1017 16:37:56.256919  108271 reflector.go:185] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I1017 16:37:56.257324  108271 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.257552  108271 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.258325  108271 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.259013  108271 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.259770  108271 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.260491  108271 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.261773  108271 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.262666  108271 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.262992  108271 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.263571  108271 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.264269  108271 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.264626  108271 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.265475  108271 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.265904  108271 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.266520  108271 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.266779  108271 watch_cache.go:451] Replace watchCache (rev: 44149) 
I1017 16:37:56.266783  108271 watch_cache.go:451] Replace watchCache (rev: 44148) 
I1017 16:37:56.268630  108271 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.269980  108271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.270349  108271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.270723  108271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.271503  108271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.272089  108271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.272469  108271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.274788  108271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.275719  108271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.276265  108271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.278217  108271 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.279030  108271 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.281066  108271 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.281817  108271 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.284018  108271 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.285431  108271 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.287311  108271 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.289507  108271 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.296456  108271 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.305880  108271 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.306272  108271 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.306415  108271 master.go:453] Skipping disabled API group "auditregistration.k8s.io".
I1017 16:37:56.306444  108271 master.go:464] Enabling API group "authentication.k8s.io".
I1017 16:37:56.306462  108271 master.go:464] Enabling API group "authorization.k8s.io".
I1017 16:37:56.306678  108271 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.306882  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.306919  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.308075  108271 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1017 16:37:56.308145  108271 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1017 16:37:56.308316  108271 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.308453  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.308485  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.309291  108271 watch_cache.go:451] Replace watchCache (rev: 44155) 
I1017 16:37:56.309341  108271 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1017 16:37:56.309421  108271 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1017 16:37:56.309475  108271 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.309590  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.309610  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.310369  108271 watch_cache.go:451] Replace watchCache (rev: 44155) 
I1017 16:37:56.310873  108271 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1017 16:37:56.310899  108271 master.go:464] Enabling API group "autoscaling".
I1017 16:37:56.311089  108271 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.311137  108271 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1017 16:37:56.311272  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.311292  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.312011  108271 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I1017 16:37:56.312185  108271 reflector.go:185] Listing and watching *batch.Job from storage/cacher.go:/jobs
I1017 16:37:56.312182  108271 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.312319  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.312337  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.314877  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.314955  108271 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I1017 16:37:56.314974  108271 watch_cache.go:451] Replace watchCache (rev: 44155) 
I1017 16:37:56.314980  108271 master.go:464] Enabling API group "batch".
I1017 16:37:56.315090  108271 reflector.go:185] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I1017 16:37:56.315176  108271 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.315324  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.315352  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.316115  108271 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I1017 16:37:56.316148  108271 master.go:464] Enabling API group "certificates.k8s.io".
I1017 16:37:56.316307  108271 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.316417  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.316452  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.316471  108271 reflector.go:185] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I1017 16:37:56.317722  108271 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1017 16:37:56.317827  108271 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1017 16:37:56.317919  108271 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.318412  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.318602  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.318605  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.318638  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.319507  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.322775  108271 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1017 16:37:56.322805  108271 master.go:464] Enabling API group "coordination.k8s.io".
I1017 16:37:56.322823  108271 master.go:453] Skipping disabled API group "discovery.k8s.io".
I1017 16:37:56.322826  108271 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1017 16:37:56.323023  108271 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.323173  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.323194  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.324311  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.324696  108271 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1017 16:37:56.324722  108271 master.go:464] Enabling API group "extensions".
I1017 16:37:56.324922  108271 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.324958  108271 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1017 16:37:56.325064  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.325091  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.326318  108271 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I1017 16:37:56.326544  108271 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.326705  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.326726  108271 reflector.go:185] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I1017 16:37:56.326730  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.328203  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.329067  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.329458  108271 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1017 16:37:56.329487  108271 master.go:464] Enabling API group "networking.k8s.io".
I1017 16:37:56.329581  108271 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1017 16:37:56.329591  108271 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.329790  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.329811  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.330231  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.335647  108271 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I1017 16:37:56.335680  108271 master.go:464] Enabling API group "node.k8s.io".
I1017 16:37:56.335727  108271 reflector.go:185] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I1017 16:37:56.335886  108271 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.336483  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.336489  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.336516  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.337873  108271 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I1017 16:37:56.337987  108271 reflector.go:185] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I1017 16:37:56.338054  108271 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.338201  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.338217  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.339097  108271 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I1017 16:37:56.339122  108271 master.go:464] Enabling API group "policy".
I1017 16:37:56.339226  108271 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.339272  108271 reflector.go:185] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I1017 16:37:56.339383  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.339415  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.339437  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.340410  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.340426  108271 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1017 16:37:56.340506  108271 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1017 16:37:56.340702  108271 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.340805  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.340825  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.341235  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.342270  108271 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1017 16:37:56.342335  108271 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.342365  108271 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1017 16:37:56.342447  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.342472  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.343232  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.343892  108271 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1017 16:37:56.343922  108271 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1017 16:37:56.344083  108271 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.344233  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.344252  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.344736  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.345043  108271 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1017 16:37:56.345116  108271 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.345188  108271 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1017 16:37:56.345275  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.345290  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.346086  108271 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1017 16:37:56.346229  108271 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.346341  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.346359  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.346385  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.346414  108271 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1017 16:37:56.347752  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.348009  108271 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1017 16:37:56.348054  108271 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.348152  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.348166  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.348236  108271 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1017 16:37:56.350072  108271 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1017 16:37:56.350221  108271 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.350373  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.350388  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.350449  108271 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1017 16:37:56.350694  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.350993  108271 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1017 16:37:56.351018  108271 master.go:464] Enabling API group "rbac.authorization.k8s.io".
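[Editor's note] The recurring pair of lines — client.go "parsed scheme: endpoint" followed by endpoint.go "ccResolverWrapper: sending new addresses to cc" — records each resource's etcd client being wired up: etcd's clientv3 registers a custom gRPC resolver scheme, and the resolver hands the configured ServerList (http://127.0.0.1:2379 above) to the gRPC client connection; the trailing "0  <nil>" is just zero-valued fields of the printed address struct. A toy stand-in for that handshake, with illustrative names rather than the real grpc-go resolver interfaces:

package main

import (
	"fmt"
	"net/url"
)

// clientConn is a stand-in for the gRPC client connection that the
// resolver pushes addresses into.
type clientConn struct{}

func (clientConn) updateAddresses(addrs []string) {
	fmt.Printf("ccResolverWrapper: sending new addresses to cc: %v\n", addrs)
}

// resolve parses the resolver target's scheme, then forwards the static
// server list to the connection, as the two log lines above record.
func resolve(target string, serverList []string, cc clientConn) error {
	u, err := url.Parse(target)
	if err != nil {
		return err
	}
	fmt.Printf("parsed scheme: %q\n", u.Scheme)
	cc.updateAddresses(serverList)
	return nil
}

func main() {
	_ = resolve("endpoint://client/", []string{"http://127.0.0.1:2379"}, clientConn{})
}

One such connection is set up per storage-backed resource, which is why the pair repeats throughout the log.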
I1017 16:37:56.351087  108271 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1017 16:37:56.351388  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.352213  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.352679  108271 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.352829  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.352844  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.353513  108271 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1017 16:37:56.353652  108271 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1017 16:37:56.353841  108271 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.353963  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.353977  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.354580  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.355273  108271 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1017 16:37:56.355304  108271 master.go:464] Enabling API group "scheduling.k8s.io".
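[Editor's note] The CompactionInterval and CountMetricPollPeriod values in every storagebackend.Config dump look opaque because the config is apparently logged with Go's %#v verb, which renders time.Duration fields as raw nanosecond counts. Decoding them shows the defaults are 5 minutes and 1 minute:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Struct dumps like the ones above print time.Duration as plain
	// nanoseconds; String() restores the human-readable form.
	fmt.Println(time.Duration(300000000000)) // CompactionInterval -> 5m0s
	fmt.Println(time.Duration(60000000000))  // CountMetricPollPeriod -> 1m0s
}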
I1017 16:37:56.355374  108271 master.go:453] Skipping disabled API group "settings.k8s.io".
I1017 16:37:56.355511  108271 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.355723  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.355740  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.355797  108271 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1017 16:37:56.356720  108271 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1017 16:37:56.356839  108271 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.356927  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.356940  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.356992  108271 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1017 16:37:56.358484  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.358751  108271 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1017 16:37:56.358796  108271 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.358877  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.358888  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.358939  108271 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1017 16:37:56.360421  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.360498  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.360614  108271 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1017 16:37:56.360688  108271 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.360783  108271 reflector.go:185] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1017 16:37:56.360808  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.360826  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.362665  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.364392  108271 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I1017 16:37:56.364581  108271 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.364742  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.364758  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.364814  108271 reflector.go:185] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I1017 16:37:56.365892  108271 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1017 16:37:56.366058  108271 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.366116  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.366196  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.366220  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.366335  108271 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1017 16:37:56.367428  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.367820  108271 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1017 16:37:56.367842  108271 master.go:464] Enabling API group "storage.k8s.io".
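[Editor's note] Each reflector.go "Listing and watching ..." line is followed, sometimes after a few interleaved lines, by watch_cache.go "Replace watchCache (rev: 44156)": the cacher seeds its watch cache with one consistent LIST snapshot at that etcd revision, then keeps the cache current from the watch stream. A toy version of that list-then-watch loop, simplified well past the real storage/cacher.go:

package main

import "fmt"

// event is one incremental change from the watch stream.
type event struct {
	rev int
	key string
	val string
}

type cache struct {
	rev   int
	items map[string]string
}

// replace mirrors "Replace watchCache (rev: N)": the LIST snapshot
// wholesale supersedes whatever was cached before.
func (c *cache) replace(rev int, snapshot map[string]string) {
	c.rev = rev
	c.items = snapshot
	fmt.Printf("Replace watchCache (rev: %d)\n", rev)
}

// apply folds a single watch event into the cache.
func (c *cache) apply(e event) {
	c.items[e.key] = e.val
	c.rev = e.rev
	fmt.Printf("apply event rev=%d key=%s\n", e.rev, e.key)
}

func main() {
	c := &cache{}
	// 1. LIST: fetch everything at one consistent revision.
	c.replace(44156, map[string]string{"/roles/admin": "v1"})
	// 2. WATCH: stream subsequent changes starting after that revision.
	for _, e := range []event{{44157, "/roles/viewer", "v1"}} {
		c.apply(e)
	}
}

All the Replace lines above share rev 44156 because every cache is seeded from the same etcd at effectively the same moment.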
I1017 16:37:56.368008  108271 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.368119  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.368138  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.368199  108271 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1017 16:37:56.369621  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.369810  108271 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I1017 16:37:56.369899  108271 reflector.go:185] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I1017 16:37:56.369990  108271 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.370139  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.370172  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.371732  108271 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I1017 16:37:56.371795  108271 reflector.go:185] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I1017 16:37:56.371934  108271 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.371970  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.372045  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.372060  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.372306  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.373289  108271 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I1017 16:37:56.373460  108271 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.373628  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.373662  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.373747  108271 reflector.go:185] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I1017 16:37:56.374969  108271 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I1017 16:37:56.375026  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.375160  108271 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.375273  108271 reflector.go:185] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I1017 16:37:56.375326  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.375344  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.376324  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.376506  108271 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I1017 16:37:56.376524  108271 reflector.go:185] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I1017 16:37:56.376529  108271 master.go:464] Enabling API group "apps".
I1017 16:37:56.376712  108271 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.377388  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.377408  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.378074  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.379105  108271 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1017 16:37:56.379145  108271 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.379247  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.379258  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.379305  108271 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1017 16:37:56.382983  108271 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1017 16:37:56.383033  108271 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.383141  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.383154  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.383226  108271 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1017 16:37:56.386243  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.386701  108271 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1017 16:37:56.386785  108271 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.386918  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.386934  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.386997  108271 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1017 16:37:56.388737  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.388865  108271 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1017 16:37:56.388887  108271 master.go:464] Enabling API group "admissionregistration.k8s.io".
I1017 16:37:56.388940  108271 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.389110  108271 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1017 16:37:56.389187  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:56.389203  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:56.390152  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.390819  108271 store.go:1342] Monitoring events count at <storage-prefix>//events
I1017 16:37:56.390838  108271 master.go:464] Enabling API group "events.k8s.io".
I1017 16:37:56.391072  108271 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.391226  108271 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.391401  108271 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.391480  108271 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.391614  108271 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.391695  108271 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.391824  108271 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.391892  108271 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.391958  108271 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.392065  108271 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
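[Editor's note] The tokenreviews and *accessreviews/*rulesreviews blocks above have storage_factory lines but, unlike the resources before them, no matching store.go "Monitoring ... count" or reflector lines: review-style resources are evaluated per request and never persisted, so no cache or watch is set up for them. A hypothetical sketch of that create-evaluate-return shape (all names invented for illustration):

package main

import "fmt"

// subjectAccessReview is a toy stand-in for a request-scoped "review"
// object: the server fills in the verdict and returns it; nothing is
// ever written to etcd.
type subjectAccessReview struct {
	User    string
	Verb    string
	Allowed bool // filled in by the server, not stored
}

// create evaluates the review in memory against an authorizer callback.
func create(r subjectAccessReview, authorize func(user, verb string) bool) subjectAccessReview {
	r.Allowed = authorize(r.User, r.Verb)
	return r
}

func main() {
	allowAdmins := func(user, verb string) bool { return user == "admin" }
	fmt.Printf("%+v\n", create(subjectAccessReview{User: "admin", Verb: "get"}, allowAdmins))
}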
I1017 16:37:56.392927  108271 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.393117  108271 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.393724  108271 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.393892  108271 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.394407  108271 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.395144  108271 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1017 16:37:56.396220  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.396869  108271 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
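[Editor's note] The repeated "storing horizontalpodautoscalers.autoscaling in autoscaling/v1" lines are not duplicates: the resource is served under several API versions (and with subresources such as status), yet all of them map onto the single autoscaling/v1 storage encoding, and each served endpoint logs its own storage_factory line. Schematically (the exact served-version set is an assumption; the shared storage version is what the log shows):

package main

import "fmt"

// Several served API versions can share one storage encoding; conversion
// through the internal type bridges them.
func main() {
	storageVersion := map[string]string{
		"autoscaling/v1":      "autoscaling/v1",
		"autoscaling/v2beta1": "autoscaling/v1",
		"autoscaling/v2beta2": "autoscaling/v1",
	}
	for served, stored := range storageVersion {
		fmt.Printf("serve %s -> store as %s\n", served, stored)
	}
}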
I1017 16:37:56.397712  108271 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.398064  108271 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.398861  108271 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.399351  108271 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1017 16:37:56.399557  108271 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
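[Editor's note] The W-level "Skipping API ... because it has no resources" lines (this one and the ones further down for node.k8s.io/v1alpha1, rbac.authorization.k8s.io/v1alpha1, scheduling.k8s.io/v1alpha1, and storage.k8s.io/v1alpha1) mark group/versions whose resources are all disabled or absent: the generic apiserver only installs a version when at least one resource has storage behind it. Illustrative logic only, not the real genericapiserver code:

package main

import "fmt"

// installGroupVersions installs a group/version only if it has at least
// one resource with storage enabled, warning otherwise — the shape of
// the W-level lines in this log.
func installGroupVersions(storage map[string]map[string]bool) {
	for gv, resources := range storage {
		if len(resources) == 0 {
			fmt.Printf("W Skipping API %s because it has no resources.\n", gv)
			continue
		}
		fmt.Printf("I Installing API %s with %d resource(s).\n", gv, len(resources))
	}
}

func main() {
	installGroupVersions(map[string]map[string]bool{
		"batch/v1":       {"jobs": true},
		"batch/v2alpha1": {},
	})
}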
I1017 16:37:56.400153  108271 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.400509  108271 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.400834  108271 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.401593  108271 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.402435  108271 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.404229  108271 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.404776  108271 watch_cache.go:451] Replace watchCache (rev: 44156) 
I1017 16:37:56.406637  108271 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.407316  108271 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.408309  108271 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.408624  108271 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.409407  108271 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1017 16:37:56.409738  108271 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I1017 16:37:56.410394  108271 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.410625  108271 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.411044  108271 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.411794  108271 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.412328  108271 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.412944  108271 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.413641  108271 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.427062  108271 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.431664  108271 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.433026  108271 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.435117  108271 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1017 16:37:56.435434  108271 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I1017 16:37:56.436507  108271 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.437371  108271 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1017 16:37:56.437440  108271 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I1017 16:37:56.438057  108271 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.438655  108271 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.438923  108271 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.439778  108271 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.440354  108271 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.440831  108271 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.441569  108271 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1017 16:37:56.441649  108271 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I1017 16:37:56.442761  108271 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.443500  108271 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.443805  108271 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.444413  108271 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.444721  108271 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.445177  108271 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.446144  108271 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.446391  108271 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.446711  108271 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.447620  108271 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.447868  108271 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.448172  108271 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1017 16:37:56.448376  108271 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1017 16:37:56.448388  108271 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
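Editor's note: the "Skipping API ... because it has no resources" warnings above come from the generic API server deciding not to install a group/version for which no resource storage was registered in this build. A simplified sketch of that kind of guard follows; the type and function names are hypothetical stand-ins, not the actual genericapiserver source.

    package main

    import "fmt"

    // apiGroupVersion is a hypothetical stand-in for the server's
    // per-version resource->storage map.
    type apiGroupVersion struct {
        GroupVersion string
        Storage      map[string]interface{}
    }

    // installIfNonEmpty skips a group/version with no registered storage,
    // which is what produces the warnings echoed in this log.
    func installIfNonEmpty(gv apiGroupVersion) {
        if len(gv.Storage) == 0 {
            fmt.Printf("Skipping API %s because it has no resources.\n", gv.GroupVersion)
            return
        }
        // ... install REST handlers for each resource in gv.Storage ...
    }

    func main() {
        installIfNonEmpty(apiGroupVersion{GroupVersion: "apps/v1beta2"})
    }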
I1017 16:37:56.449662  108271 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.450845  108271 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.452260  108271 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.453065  108271 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 16:37:56.453769  108271 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"f349d0d6-9d5f-421a-86f1-48b0ed79f167", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
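Editor's note: every storage_factory.go line above prints the same storage backend configuration via %#v. A minimal sketch of that configuration, reconstructed from the logged values (etcd at 127.0.0.1:2379, paging on, 5m compaction interval, 1m count-metric poll period); the field names are taken verbatim from the log output, but treat the snippet as illustrative rather than the test's actual wiring.

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apiserver/pkg/storage/storagebackend"
    )

    func main() {
        cfg := storagebackend.Config{
            // Per-test etcd key prefix, as seen in the log.
            Prefix: "f349d0d6-9d5f-421a-86f1-48b0ed79f167",
            Transport: storagebackend.TransportConfig{
                ServerList: []string{"http://127.0.0.1:2379"},
            },
            Paging:                true,
            CompactionInterval:    5 * time.Minute, // 300000000000ns in the log
            CountMetricPollPeriod: time.Minute,     // 60000000000ns in the log
        }
        fmt.Printf("%#v\n", cfg)
    }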
I1017 16:37:56.458350  108271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 16:37:56.458377  108271 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I1017 16:37:56.458388  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:56.458398  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:56.458407  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:56.458414  108271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:56.458442  108271 httplog.go:90] GET /healthz: (201.377µs) 0 [Go-http-client/1.1 127.0.0.1:52154]
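Editor's note: the repeated "GET /healthz" entries that follow are a readiness poll against the test apiserver, issued roughly every 100ms until all checks pass. A hedged sketch of such a poll; the base URL is hypothetical, since the test apiserver listens on an ephemeral port.

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
        "time"
    )

    func main() {
        const healthzURL = "http://127.0.0.1:8080/healthz" // hypothetical address
        for i := 0; i < 50; i++ {
            resp, err := http.Get(healthzURL)
            if err == nil {
                body, _ := ioutil.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz ok")
                    return
                }
                // On failure the body lists each check as [+] ok or
                // [-] failed, exactly as echoed in the log above.
                fmt.Printf("not ready (%d):\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(100 * time.Millisecond)
        }
    }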
I1017 16:37:56.458735  108271 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.326926ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52152]
I1017 16:37:56.462277  108271 httplog.go:90] GET /api/v1/services: (2.058226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:56.467134  108271 httplog.go:90] GET /api/v1/services: (1.304469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:56.470590  108271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 16:37:56.470617  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:56.470627  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:56.470636  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:56.470643  108271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:56.470669  108271 httplog.go:90] GET /healthz: (230.808µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:56.471765  108271 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.250095ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52152]
I1017 16:37:56.473838  108271 httplog.go:90] GET /api/v1/services: (1.938174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:56.474937  108271 httplog.go:90] POST /api/v1/namespaces: (2.819989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52152]
I1017 16:37:56.475162  108271 httplog.go:90] GET /api/v1/services: (1.236528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I1017 16:37:56.476549  108271 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.254289ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52152]
I1017 16:37:56.478631  108271 httplog.go:90] POST /api/v1/namespaces: (1.558061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I1017 16:37:56.480030  108271 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (867.29µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I1017 16:37:56.482040  108271 httplog.go:90] POST /api/v1/namespaces: (1.601485ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
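Editor's note: the GET 404 / POST 201 pairs above show the bootstrap controller ensuring the system namespaces (kube-system, kube-public, kube-node-lease) exist. An illustrative get-or-create with client-go of this era (pre-context method signatures); the kubeconfig path is hypothetical and this is not the bootstrap controller's actual code.

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func ensureNamespaces(cs kubernetes.Interface) error {
        for _, name := range []string{"kube-system", "kube-public", "kube-node-lease"} {
            _, err := cs.CoreV1().Namespaces().Get(name, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                // Mirrors the 404-then-201 pairs in the log.
                ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
                if _, err := cs.CoreV1().Namespaces().Create(ns); err != nil {
                    return err
                }
                continue
            }
            if err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        if err := ensureNamespaces(cs); err != nil {
            panic(err)
        }
    }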
I1017 16:37:56.560289  108271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 16:37:56.560316  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:56.560325  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:56.560331  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:56.560352  108271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:56.560399  108271 httplog.go:90] GET /healthz: (270.108µs) 0 [Go-http-client/1.1 127.0.0.1:52156]
I1017 16:37:56.571484  108271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 16:37:56.571514  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:56.571525  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:56.571660  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:56.571704  108271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:56.571755  108271 httplog.go:90] GET /healthz: (439.128µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I1017 16:37:56.660134  108271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 16:37:56.660172  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:56.660185  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:56.660193  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:56.660201  108271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:56.660230  108271 httplog.go:90] GET /healthz: (249.094µs) 0 [Go-http-client/1.1 127.0.0.1:52156]
I1017 16:37:56.671240  108271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 16:37:56.671270  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:56.671279  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:56.671287  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:56.671308  108271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:56.671335  108271 httplog.go:90] GET /healthz: (228.313µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I1017 16:37:56.760250  108271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 16:37:56.760293  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:56.760307  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:56.760318  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:56.760329  108271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:56.760391  108271 httplog.go:90] GET /healthz: (319.577µs) 0 [Go-http-client/1.1 127.0.0.1:52156]
I1017 16:37:56.771467  108271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 16:37:56.771512  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:56.771524  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:56.771623  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:56.771640  108271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:56.771690  108271 httplog.go:90] GET /healthz: (392.223µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I1017 16:37:56.861763  108271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 16:37:56.861801  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:56.861813  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:56.861822  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:56.861830  108271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:56.861858  108271 httplog.go:90] GET /healthz: (234.717µs) 0 [Go-http-client/1.1 127.0.0.1:52156]
I1017 16:37:56.871340  108271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 16:37:56.871375  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:56.871384  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:56.871390  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:56.871396  108271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:56.871431  108271 httplog.go:90] GET /healthz: (217.275µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I1017 16:37:56.959977  108271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 16:37:56.960012  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:56.960024  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:56.960033  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:56.960038  108271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:56.960062  108271 httplog.go:90] GET /healthz: (215.517µs) 0 [Go-http-client/1.1 127.0.0.1:52156]
I1017 16:37:56.971405  108271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 16:37:56.971448  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:56.971461  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:56.971471  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:56.971479  108271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:56.971511  108271 httplog.go:90] GET /healthz: (263.83µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I1017 16:37:57.060095  108271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 16:37:57.060135  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.060148  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:57.060157  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:57.060166  108271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:57.060215  108271 httplog.go:90] GET /healthz: (274.398µs) 0 [Go-http-client/1.1 127.0.0.1:52156]
I1017 16:37:57.071377  108271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 16:37:57.071416  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.071429  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:57.071439  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:57.071448  108271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:57.071482  108271 httplog.go:90] GET /healthz: (274.395µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I1017 16:37:57.159945  108271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 16:37:57.159978  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.159991  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:57.160001  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:57.160011  108271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:57.160044  108271 httplog.go:90] GET /healthz: (246.474µs) 0 [Go-http-client/1.1 127.0.0.1:52156]
I1017 16:37:57.171383  108271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 16:37:57.171417  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.171432  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:57.171442  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:57.171450  108271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:57.171506  108271 httplog.go:90] GET /healthz: (321.349µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I1017 16:37:57.216826  108271 client.go:357] parsed scheme: "endpoint"
I1017 16:37:57.216912  108271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 16:37:57.262452  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.262480  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:57.262490  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:57.262499  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:57.262678  108271 httplog.go:90] GET /healthz: (2.772468ms) 0 [Go-http-client/1.1 127.0.0.1:52156]
I1017 16:37:57.272503  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.272547  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:57.272560  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:57.272568  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:57.272607  108271 httplog.go:90] GET /healthz: (1.303746ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I1017 16:37:57.361347  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.361380  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:57.361391  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:57.361399  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:57.361456  108271 httplog.go:90] GET /healthz: (1.480874ms) 0 [Go-http-client/1.1 127.0.0.1:52156]
I1017 16:37:57.372337  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.372363  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:57.372371  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:57.372377  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:57.372408  108271 httplog.go:90] GET /healthz: (918.244µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I1017 16:37:57.458528  108271 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.311743ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I1017 16:37:57.458773  108271 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.062143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.460759  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.460783  108271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 16:37:57.460793  108271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 16:37:57.460800  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 16:37:57.460836  108271 httplog.go:90] GET /healthz: (843.392µs) 0 [Go-http-client/1.1 127.0.0.1:54086]
I1017 16:37:57.460845  108271 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.735877ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.460961  108271 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.800196ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I1017 16:37:57.461084  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.265555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.461103  108271 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I1017 16:37:57.462439  108271 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.072853ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.462463  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (945.802µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.462471  108271 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.343118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54086]
I1017 16:37:57.463552  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (727.558µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54086]
I1017 16:37:57.464129  108271 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.247673ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.464359  108271 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I1017 16:37:57.464392  108271 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
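Editor's note: the two system priority classes just bootstrapped, written out as API objects with the exact values from the log (2000001000 and 2000000000). The POSTs above went to scheduling.k8s.io/v1beta1; the snippet is illustrative only.

    package main

    import (
        "fmt"

        schedulingv1beta1 "k8s.io/api/scheduling/v1beta1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var systemPriorityClasses = []schedulingv1beta1.PriorityClass{
        {
            ObjectMeta: metav1.ObjectMeta{Name: "system-node-critical"},
            Value:      2000001000,
        },
        {
            ObjectMeta: metav1.ObjectMeta{Name: "system-cluster-critical"},
            Value:      2000000000,
        },
    }

    func main() {
        for _, pc := range systemPriorityClasses {
            fmt.Printf("%s=%d\n", pc.Name, pc.Value)
        }
    }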
I1017 16:37:57.464706  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (764.617µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54086]
I1017 16:37:57.465734  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (670.747µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.466672  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (661.119µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.467525  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (612.308µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.468770  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (854.245µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.470047  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (647.599µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.471459  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (984.688µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.471769  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.471800  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:57.471827  108271 httplog.go:90] GET /healthz: (749.611µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.473366  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.534877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.473665  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1017 16:37:57.474975  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (884.375µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.477185  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.529737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.477352  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1017 16:37:57.479165  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.31643ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
E1017 16:37:57.479965  108271 event_broadcaster.go:247] Unable to write event: 'Post http://127.0.0.1:43869/apis/events.k8s.io/v1beta1/namespaces/permit-plugine80703eb-d754-47ac-aeb7-6496a4e5f9b2/events: dial tcp 127.0.0.1:43869: connect: connection refused' (may retry after sleeping)
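Editor's note: this E-level line is not from the apiserver under test; the different port (43869) and the permit-plugin namespace suggest an event broadcaster left over from an earlier test in the same process, still retrying against an apiserver that has already been torn down. A hedged sketch of the "may retry after sleeping" behavior the message describes; the URL, payload, and backoff are hypothetical.

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
        "time"
    )

    // postEventWithRetry retries a failed event write after sleeping,
    // logging each failure in the style of the message above.
    func postEventWithRetry(url string, payload []byte, attempts int) error {
        for i := 0; i < attempts; i++ {
            resp, err := http.Post(url, "application/json", bytes.NewReader(payload))
            if err == nil {
                resp.Body.Close()
                return nil
            }
            fmt.Printf("Unable to write event: '%v' (may retry after sleeping)\n", err)
            time.Sleep(time.Second) // hypothetical backoff
        }
        return fmt.Errorf("giving up after %d attempts", attempts)
    }

    func main() {
        _ = postEventWithRetry(
            "http://127.0.0.1:43869/apis/events.k8s.io/v1beta1/namespaces/test/events", // namespace hypothetical
            []byte(`{}`), 3)
    }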
I1017 16:37:57.480782  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.268685ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.480972  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1017 16:37:57.482132  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (847.448µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.484090  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.431434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.484244  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I1017 16:37:57.485482  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.011572ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.487322  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.443359ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.487477  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I1017 16:37:57.488879  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.036553ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.490910  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.499321ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.491213  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I1017 16:37:57.493189  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.785166ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.497309  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.729625ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.497559  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I1017 16:37:57.499826  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.068241ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.504988  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.751755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.507234  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1017 16:37:57.508767  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.319129ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.511956  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.790254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.512199  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1017 16:37:57.513354  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (856.484µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.515846  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.047024ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.516089  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1017 16:37:57.517406  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (793.105µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.519223  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.344191ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.519422  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1017 16:37:57.522989  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (3.181628ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.528402  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.96074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.528826  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I1017 16:37:57.531454  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (994.738µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.533727  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.739317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.533959  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1017 16:37:57.535057  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (928.246µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.537263  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.762047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.537471  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1017 16:37:57.541153  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (2.602838ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.546130  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.389496ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.546727  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1017 16:37:57.548721  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.783824ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.551259  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.117678ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.551480  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1017 16:37:57.553074  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.41325ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.556274  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.80585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.556635  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1017 16:37:57.557810  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (852.112µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.558435  108271 cacher.go:785] cacher (*rbac.ClusterRole): 1 objects queued in incoming channel.
I1017 16:37:57.558462  108271 cacher.go:785] cacher (*rbac.ClusterRole): 2 objects queued in incoming channel.
I1017 16:37:57.558480  108271 cacher.go:785] cacher (*rbac.ClusterRole): 3 objects queued in incoming channel.
I1017 16:37:57.558494  108271 cacher.go:785] cacher (*rbac.ClusterRole): 4 objects queued in incoming channel.
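Editor's note: the cacher lines above are informational; they show the watch cache's incoming channel backing up while the RBAC bootstrap creates ClusterRoles in a burst, faster than the dispatch loop drains them. A toy illustration of the pattern being measured (not the cacher source): a buffered channel whose consumer momentarily lags its producer, so len(ch) grows.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        incoming := make(chan string, 100)
        go func() { // slow consumer, like the cacher's dispatch loop
            for obj := range incoming {
                time.Sleep(5 * time.Millisecond)
                _ = obj
            }
        }()
        for i := 1; i <= 4; i++ { // fast producer: a burst of ClusterRole creates
            incoming <- fmt.Sprintf("clusterrole-%d", i)
            fmt.Printf("cacher (*rbac.ClusterRole): %d objects queued in incoming channel.\n", len(incoming))
        }
        time.Sleep(50 * time.Millisecond)
        close(incoming)
    }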
I1017 16:37:57.560285  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.016105ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.560915  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1017 16:37:57.562730  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.592279ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.566173  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.879982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.567287  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1017 16:37:57.568023  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.568043  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:57.568071  108271 httplog.go:90] GET /healthz: (1.153499ms) 0 [Go-http-client/1.1 127.0.0.1:54076]
I1017 16:37:57.568260  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (683.055µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.573648  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.820358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.574071  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.574100  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:57.574143  108271 httplog.go:90] GET /healthz: (2.72123ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.574431  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1017 16:37:57.576365  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.600518ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.579676  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.508755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.579968  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I1017 16:37:57.581285  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.037598ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.584883  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.043226ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.585085  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1017 16:37:57.586268  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (913.822µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.588701  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.762678ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.589116  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1017 16:37:57.590840  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.467745ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.593764  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.785634ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.593949  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1017 16:37:57.595203  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.056283ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.597872  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.276723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.598619  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1017 16:37:57.600924  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.88854ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.603830  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.296613ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.604153  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1017 16:37:57.605150  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (813.891µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.607369  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.862677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.607714  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I1017 16:37:57.608871  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (885.153µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.611159  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.676273ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.611977  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1017 16:37:57.613572  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.254813ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.616921  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.701506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.617516  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1017 16:37:57.622149  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (3.988771ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.631749  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (8.896024ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.632020  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1017 16:37:57.635035  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (2.204603ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.642205  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.091257ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.642800  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1017 16:37:57.645283  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (2.091674ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.650850  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.050412ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.651759  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1017 16:37:57.653155  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.192341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.656494  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.526497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.656865  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1017 16:37:57.657959  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (892.999µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.660902  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.661049  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:57.661113  108271 httplog.go:90] GET /healthz: (1.255995ms) 0 [Go-http-client/1.1 127.0.0.1:54076]
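
(Editor's note: each failing GET /healthz above prints the per-check breakdown, and the endpoint keeps returning failure until every registered check — here the rbac/bootstrap-roles poststarthook — reports done. A minimal client-side sketch of waiting on that endpoint follows, assuming only the standard library and a plain-text verbose healthz response like the one logged; the address and retry counts are placeholders, since the test server listens on a random port.)

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls /healthz until the apiserver returns 200 OK or the
// attempts run out, printing the verbose per-check report on failure.
func waitHealthy(base string, attempts int, interval time.Duration) error {
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(base + "/healthz?verbose")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// Mirrors the "[+]ping ok / [-]poststarthook/... failed" report above.
			fmt.Printf("healthz not ready (HTTP %d):\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver did not become healthy after %d attempts", attempts)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:8080", 30, 500*time.Millisecond); err != nil {
		panic(err)
	}
	fmt.Println("all healthz checks ok")
}
```
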
I1017 16:37:57.667880  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (9.373499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.668454  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1017 16:37:57.671341  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (2.446376ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.672410  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.672436  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:57.672466  108271 httplog.go:90] GET /healthz: (1.269303ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.676566  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.682113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.676803  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1017 16:37:57.678275  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.185191ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.680594  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.654064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.681288  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1017 16:37:57.684263  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (2.574608ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.687717  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.430557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.687918  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1017 16:37:57.689510  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.401158ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.693813  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.283652ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.694415  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1017 16:37:57.696265  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.505553ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.698712  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.980771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.698976  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1017 16:37:57.700472  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.297987ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.703291  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.311603ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.703711  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1017 16:37:57.705176  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.225202ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.707876  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.021034ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.708152  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1017 16:37:57.709423  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.026179ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.711745  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.797645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.711983  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1017 16:37:57.713344  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.126852ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.715603  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.602931ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.715888  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1017 16:37:57.717406  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.237454ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.719550  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.613994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.719783  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1017 16:37:57.722487  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (2.515641ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.733255  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (9.940914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.737061  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1017 16:37:57.738593  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.097689ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.741383  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.273739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.743137  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1017 16:37:57.744853  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.188218ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.747319  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.916321ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.747680  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1017 16:37:57.750168  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (2.181718ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.753276  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.235349ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.753937  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1017 16:37:57.755474  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.000115ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.758515  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.048861ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.759101  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1017 16:37:57.760956  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.465654ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.761085  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.761554  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:57.761828  108271 httplog.go:90] GET /healthz: (1.988001ms) 0 [Go-http-client/1.1 127.0.0.1:54076]
I1017 16:37:57.768719  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.240459ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.769217  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1017 16:37:57.771416  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.77078ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:57.772663  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.772708  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:57.772764  108271 httplog.go:90] GET /healthz: (1.003558ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.775874  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.862642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.776087  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1017 16:37:57.777508  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.206312ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.779869  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.850916ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.780258  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1017 16:37:57.782675  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.319344ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.786428  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.02796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.786800  108271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1017 16:37:57.788187  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.192088ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.800731  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.674792ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.801065  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
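
(Editor's note: with the roles in place, the hook moves on to clusterrolebindings, starting with cluster-admin above, using the same get-or-create dance. For reference, a hypothetical binding object with the same shape as the ones being created is sketched below; the names are illustrative, and creating it would go through cs.RbacV1().ClusterRoleBindings() exactly as the earlier ensure sketch did for roles.)

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical binding; the bootstrap hook creates the real ones
	// (cluster-admin, system:discovery, ...) with this same structure.
	b := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "example:log-reader"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "example:log-reader",
		},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "log-reader",
			Namespace: "kube-system",
		}},
	}
	fmt.Printf("%s grants %s to %s/%s\n",
		b.Name, b.RoleRef.Name, b.Subjects[0].Namespace, b.Subjects[0].Name)
}
```
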
I1017 16:37:57.822111  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (4.009167ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.840843  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.570181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.841188  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1017 16:37:57.860968  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (2.719287ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.861001  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.861169  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:57.861210  108271 httplog.go:90] GET /healthz: (1.232797ms) 0 [Go-http-client/1.1 127.0.0.1:52154]
I1017 16:37:57.874288  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.874319  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:57.874369  108271 httplog.go:90] GET /healthz: (1.567438ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.880077  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.967418ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.880753  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1017 16:37:57.899803  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.633071ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.922037  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.819649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.922567  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I1017 16:37:57.939728  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.323605ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.960775  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.960840  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:57.960848  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.701843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.960864  108271 httplog.go:90] GET /healthz: (924.386µs) 0 [Go-http-client/1.1 127.0.0.1:52154]
I1017 16:37:57.961064  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1017 16:37:57.972896  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:57.972928  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:57.973001  108271 httplog.go:90] GET /healthz: (1.574428ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:57.979488  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.385779ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.001797  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.363215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.002090  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1017 16:37:58.026203  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (8.037915ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.040799  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.667193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.041024  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1017 16:37:58.060153  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (2.001828ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.060834  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.060870  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.060918  108271 httplog.go:90] GET /healthz: (1.058819ms) 0 [Go-http-client/1.1 127.0.0.1:52154]
I1017 16:37:58.072560  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.072589  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.072641  108271 httplog.go:90] GET /healthz: (1.348765ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.081096  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.019887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.081354  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1017 16:37:58.099070  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (991.399µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.125282  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.295085ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.125786  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1017 16:37:58.139520  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.254049ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.160325  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.223024ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.160624  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1017 16:37:58.160990  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.161024  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.161072  108271 httplog.go:90] GET /healthz: (1.072024ms) 0 [Go-http-client/1.1 127.0.0.1:52154]
I1017 16:37:58.174804  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.174839  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.174883  108271 httplog.go:90] GET /healthz: (1.529013ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.179831  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.72751ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.201048  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.747954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.201771  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1017 16:37:58.229999  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (11.839498ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.240916  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.701489ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.241183  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1017 16:37:58.259888  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.674793ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.261085  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.261108  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.261142  108271 httplog.go:90] GET /healthz: (1.360047ms) 0 [Go-http-client/1.1 127.0.0.1:54076]
I1017 16:37:58.272729  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.272762  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.272809  108271 httplog.go:90] GET /healthz: (1.606906ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.280774  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.503667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.281114  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1017 16:37:58.299847  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.584944ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.320431  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.267675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.320740  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1017 16:37:58.339742  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.392074ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.361268  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.876654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.361495  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1017 16:37:58.362109  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.362159  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.362193  108271 httplog.go:90] GET /healthz: (1.547712ms) 0 [Go-http-client/1.1 127.0.0.1:52154]
I1017 16:37:58.375380  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.375414  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.375473  108271 httplog.go:90] GET /healthz: (4.238393ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.382099  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (4.0159ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.400451  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.044604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.400679  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1017 16:37:58.420938  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (2.806818ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.440504  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.325417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.440869  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1017 16:37:58.459470  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.337149ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.460862  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.460899  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.460937  108271 httplog.go:90] GET /healthz: (1.010261ms) 0 [Go-http-client/1.1 127.0.0.1:54076]
I1017 16:37:58.474322  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.474361  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.474418  108271 httplog.go:90] GET /healthz: (1.08422ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.481620  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.725754ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.481941  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1017 16:37:58.503416  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (4.353289ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.522018  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.834637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.522329  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1017 16:37:58.540393  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (2.131814ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.560167  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.974361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.560486  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1017 16:37:58.560709  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.560746  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.560778  108271 httplog.go:90] GET /healthz: (957.245µs) 0 [Go-http-client/1.1 127.0.0.1:52154]
I1017 16:37:58.572639  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.572676  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.572727  108271 httplog.go:90] GET /healthz: (1.438196ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.579427  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.312835ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.601025  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.812574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.601329  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1017 16:37:58.619481  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.305117ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.641100  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.740663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.641304  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1017 16:37:58.659207  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.123704ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.660579  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.660606  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.660656  108271 httplog.go:90] GET /healthz: (863.005µs) 0 [Go-http-client/1.1 127.0.0.1:52154]
I1017 16:37:58.672500  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.672549  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.672600  108271 httplog.go:90] GET /healthz: (1.332485ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.681056  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.933331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.681334  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1017 16:37:58.702032  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.489731ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.720368  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.120611ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.720621  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1017 16:37:58.740524  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (2.436657ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.760401  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.209454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:58.760805  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1017 16:37:58.761255  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.761284  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.761330  108271 httplog.go:90] GET /healthz: (1.259347ms) 0 [Go-http-client/1.1 127.0.0.1:54076]
I1017 16:37:58.772826  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.772859  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.772906  108271 httplog.go:90] GET /healthz: (1.634939ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.780389  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.323219ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.802421  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.421291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.802683  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1017 16:37:58.819439  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.276521ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.840251  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.095567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.840692  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1017 16:37:58.860389  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.449161ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.862139  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.862166  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.862195  108271 httplog.go:90] GET /healthz: (968.007µs) 0 [Go-http-client/1.1 127.0.0.1:54076]
I1017 16:37:58.872816  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.872843  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.872898  108271 httplog.go:90] GET /healthz: (1.103549ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.880922  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.568724ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.881182  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1017 16:37:58.900278  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.41335ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.922021  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.712622ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.922333  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1017 16:37:58.939165  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.097874ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
E1017 16:37:58.946009  108271 factory.go:687] Error getting pod permit-plugine80703eb-d754-47ac-aeb7-6496a4e5f9b2/signalling-pod for retry: Get http://127.0.0.1:43869/api/v1/namespaces/permit-plugine80703eb-d754-47ac-aeb7-6496a4e5f9b2/pods/signalling-pod: dial tcp 127.0.0.1:43869: connect: connection refused; retrying...
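
(Editor's note: the single E-level line above is the scheduler factory retrying a pod fetch against an apiserver from an earlier test (127.0.0.1:43869) that has already shut down, hence "connection refused; retrying...". A hedged sketch of that retry-on-transient-error shape using apimachinery's wait package follows; the function and endpoint here are stand-ins, not the factory's actual code.)

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// fetchPod stands in for the GET the factory retries; it fails with a
// *net.OpError ("connection refused") while the apiserver is unreachable.
func fetchPod(endpoint string) error {
	conn, err := net.DialTimeout("tcp", endpoint, time.Second)
	if err != nil {
		return err
	}
	conn.Close()
	return nil
}

func main() {
	// Poll at a fixed interval with an overall timeout — the usual shape
	// of "retrying..." loops in Kubernetes components.
	err := wait.PollImmediate(500*time.Millisecond, 5*time.Second, func() (bool, error) {
		if err := fetchPod("127.0.0.1:43869"); err != nil {
			var opErr *net.OpError
			if errors.As(err, &opErr) {
				fmt.Println("transient:", err, "; retrying...")
				return false, nil // keep polling on connection errors
			}
			return false, err // give up on anything else
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("gave up:", err)
	}
}
```
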
I1017 16:37:58.960162  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.008394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.960353  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1017 16:37:58.961314  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.961339  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.961366  108271 httplog.go:90] GET /healthz: (863.943µs) 0 [Go-http-client/1.1 127.0.0.1:54076]
I1017 16:37:58.972084  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:58.972129  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:58.972168  108271 httplog.go:90] GET /healthz: (959.442µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:58.979453  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.185159ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.001176  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.986741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.001440  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1017 16:37:59.023141  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (4.938259ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.040800  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.628191ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.041220  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1017 16:37:59.059634  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.468678ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.060777  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:59.060802  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:59.060830  108271 httplog.go:90] GET /healthz: (911.413µs) 0 [Go-http-client/1.1 127.0.0.1:54076]
I1017 16:37:59.072129  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:59.072156  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:59.072203  108271 httplog.go:90] GET /healthz: (1.003321ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.083212  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.083139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.083521  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1017 16:37:59.099667  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.521494ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.123031  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.80439ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.123380  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1017 16:37:59.141167  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (2.863918ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.160678  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.337042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.160978  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1017 16:37:59.161153  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:59.161180  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:59.161211  108271 httplog.go:90] GET /healthz: (1.102769ms) 0 [Go-http-client/1.1 127.0.0.1:52154]
I1017 16:37:59.172116  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:59.172162  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:59.172211  108271 httplog.go:90] GET /healthz: (995.883µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.179997  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.311539ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.202138  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.508298ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.203007  108271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1017 16:37:59.222741  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (4.590842ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.224717  108271 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.412466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.241098  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.917128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.241808  108271 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1017 16:37:59.261169  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (2.912583ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.262332  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:59.262360  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:59.262395  108271 httplog.go:90] GET /healthz: (2.348102ms) 0 [Go-http-client/1.1 127.0.0.1:54076]
I1017 16:37:59.266977  108271 httplog.go:90] GET /api/v1/namespaces/kube-system: (5.112305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.272233  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:59.272342  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:59.272454  108271 httplog.go:90] GET /healthz: (1.204127ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.280954  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.913313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.281348  108271 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1017 16:37:59.299564  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.378882ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.301752  108271 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.429979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.326175  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (7.828788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.326663  108271 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1017 16:37:59.339645  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.455543ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.341463  108271 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.29759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.360470  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.282501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.361566  108271 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1017 16:37:59.363057  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:59.363078  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:59.363111  108271 httplog.go:90] GET /healthz: (3.224683ms) 0 [Go-http-client/1.1 127.0.0.1:54076]
I1017 16:37:59.372247  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:59.372277  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:59.372306  108271 httplog.go:90] GET /healthz: (919.106µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.379384  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.191485ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.383134  108271 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.95225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.404310  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.395856ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.404582  108271 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1017 16:37:59.419818  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.241228ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.427939  108271 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.016203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.442161  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.926869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.442480  108271 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1017 16:37:59.459504  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.376696ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.460816  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:59.460841  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:59.460878  108271 httplog.go:90] GET /healthz: (996.021µs) 0 [Go-http-client/1.1 127.0.0.1:52154]
I1017 16:37:59.462122  108271 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.992507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.472392  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:59.472425  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:59.472467  108271 httplog.go:90] GET /healthz: (1.273815ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.482139  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (3.733137ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.484213  108271 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1017 16:37:59.499566  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.381364ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.501868  108271 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.791993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.520598  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.432752ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.520885  108271 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I1017 16:37:59.541638  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (3.512045ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.543743  108271 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.345443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.560590  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.392527ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I1017 16:37:59.560972  108271 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1017 16:37:59.562278  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:59.562306  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:59.562347  108271 httplog.go:90] GET /healthz: (1.136783ms) 0 [Go-http-client/1.1 127.0.0.1:52154]
I1017 16:37:59.575284  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:59.575336  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:59.575380  108271 httplog.go:90] GET /healthz: (1.95922ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.581399  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.569781ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.583715  108271 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.7249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.600775  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.589727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.601072  108271 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1017 16:37:59.619619  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.528289ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.622863  108271 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.771455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.640884  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.639346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.641207  108271 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1017 16:37:59.660202  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.999157ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.660771  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:59.660801  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:59.660911  108271 httplog.go:90] GET /healthz: (1.157932ms) 0 [Go-http-client/1.1 127.0.0.1:54076]
I1017 16:37:59.662737  108271 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.740927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.672281  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:59.672314  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:59.672355  108271 httplog.go:90] GET /healthz: (1.138552ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.680513  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.246066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.680959  108271 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1017 16:37:59.701267  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (3.032302ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.703901  108271 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.030284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.723480  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (5.317371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.723771  108271 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1017 16:37:59.739598  108271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.478872ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.741824  108271 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.617587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.761700  108271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 16:37:59.761733  108271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 16:37:59.761771  108271 httplog.go:90] GET /healthz: (1.709518ms) 0 [Go-http-client/1.1 127.0.0.1:54076]
I1017 16:37:59.761990  108271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.178895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.762183  108271 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1017 16:37:59.774558  108271 httplog.go:90] GET /healthz: (2.066917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.776677  108271 httplog.go:90] GET /api/v1/namespaces/default: (1.41839ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.779608  108271 httplog.go:90] POST /api/v1/namespaces: (2.249669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.781879  108271 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.587228ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.785940  108271 httplog.go:90] POST /api/v1/namespaces/default/services: (3.690669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.787426  108271 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.079502ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.788848  108271 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (780.825µs) 422 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
E1017 16:37:59.789109  108271 controller.go:227] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
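
[Note] This 422 is expected noise in the integration setup: the test apiserver runs without an advertise address, so the endpoint reconciler renders the kubernetes service IP as the literal string "<nil>" and field validation rejects it. The check itself is plain IP parsing; a sketch of the semantics (not the apiserver's actual validation code):

    package validate

    import "net"

    // validIP reports whether s parses as an IP address; "<nil>" does
    // not, which is exactly the Invalid value error logged above.
    func validIP(s string) bool {
    	return net.ParseIP(s) != nil
    }
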
I1017 16:37:59.861209  108271 httplog.go:90] GET /healthz: (1.160538ms) 200 [Go-http-client/1.1 127.0.0.1:52154]
I1017 16:37:59.865378  108271 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (3.028432ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
W1017 16:37:59.865882  108271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 16:37:59.866136  108271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 16:37:59.866158  108271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 16:37:59.866208  108271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 16:37:59.866417  108271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 16:37:59.866974  108271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 16:37:59.867071  108271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 16:37:59.867211  108271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 16:37:59.867329  108271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 16:37:59.867457  108271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 16:37:59.867775  108271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1017 16:37:59.869817  108271 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-0: (1.71584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.870162  108271 factory.go:291] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I1017 16:37:59.870196  108271 factory.go:308] Registering predicate: PredicateOne
I1017 16:37:59.870206  108271 algorithm_factory.go:288] Predicate type PredicateOne already registered, reusing.
I1017 16:37:59.870213  108271 factory.go:308] Registering predicate: PredicateTwo
I1017 16:37:59.870219  108271 algorithm_factory.go:288] Predicate type PredicateTwo already registered, reusing.
I1017 16:37:59.870226  108271 factory.go:323] Registering priority: PriorityOne
I1017 16:37:59.870234  108271 algorithm_factory.go:399] Priority type PriorityOne already registered, reusing.
I1017 16:37:59.870246  108271 factory.go:323] Registering priority: PriorityTwo
I1017 16:37:59.870251  108271 algorithm_factory.go:399] Priority type PriorityTwo already registered, reusing.
I1017 16:37:59.870260  108271 factory.go:369] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
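
[Note] scheduler-custom-policy-config-0 is a ConfigMap carrying a scheduler Policy; the factory lines above echo two custom predicates and two priorities with weights 1 and 5. The log never prints the ConfigMap body, but a Policy consistent with it would look like the following — an illustrative reconstruction, not the test's literal data:

    package policy

    // policyJSON is a kube-scheduler v1 Policy matching the factory
    // log lines above (two predicates, priorities weighted 1 and 5).
    const policyJSON = `{
      "kind": "Policy",
      "apiVersion": "v1",
      "predicates": [
        {"name": "PredicateOne"},
        {"name": "PredicateTwo"}
      ],
      "priorities": [
        {"name": "PriorityOne", "weight": 1},
        {"name": "PriorityTwo", "weight": 5}
      ]
    }`
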
I1017 16:37:59.876581  108271 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (5.334218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
W1017 16:37:59.876997  108271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1017 16:37:59.878318  108271 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-1: (1.027457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.878911  108271 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1017 16:37:59.878951  108271 factory.go:300] Using predicates from algorithm provider 'DefaultProvider'
I1017 16:37:59.878965  108271 factory.go:315] Using priorities from algorithm provider 'DefaultProvider'
I1017 16:37:59.878972  108271 factory.go:369] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I1017 16:37:59.882679  108271 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.806869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
W1017 16:37:59.883208  108271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1017 16:37:59.884954  108271 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-2: (1.222383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.885212  108271 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1017 16:37:59.885235  108271 factory.go:369] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I1017 16:37:59.887464  108271 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.690419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
W1017 16:37:59.887744  108271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1017 16:37:59.889486  108271 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-3: (1.474612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.890192  108271 factory.go:291] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I1017 16:37:59.890236  108271 factory.go:308] Registering predicate: PredicateOne
I1017 16:37:59.890246  108271 algorithm_factory.go:288] Predicate type PredicateOne already registered, reusing.
I1017 16:37:59.890254  108271 factory.go:308] Registering predicate: PredicateTwo
I1017 16:37:59.890259  108271 algorithm_factory.go:288] Predicate type PredicateTwo already registered, reusing.
I1017 16:37:59.890266  108271 factory.go:323] Registering priority: PriorityOne
I1017 16:37:59.890274  108271 algorithm_factory.go:399] Priority type PriorityOne already registered, reusing.
I1017 16:37:59.890288  108271 factory.go:323] Registering priority: PriorityTwo
I1017 16:37:59.890367  108271 algorithm_factory.go:399] Priority type PriorityTwo already registered, reusing.
I1017 16:37:59.890381  108271 factory.go:369] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I1017 16:37:59.894067  108271 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.115157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
W1017 16:37:59.894299  108271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1017 16:37:59.896301  108271 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-4: (1.553713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:37:59.896755  108271 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1017 16:37:59.896798  108271 factory.go:300] Using predicates from algorithm provider 'DefaultProvider'
I1017 16:37:59.896810  108271 factory.go:315] Using priorities from algorithm provider 'DefaultProvider'
I1017 16:37:59.896819  108271 factory.go:369] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I1017 16:38:00.062644  108271 request.go:538] Throttling request took 165.083732ms, request: POST:http://127.0.0.1:34215/api/v1/namespaces/kube-system/configmaps
I1017 16:38:00.065808  108271 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.924354ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
W1017 16:38:00.066137  108271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1017 16:38:00.262323  108271 request.go:538] Throttling request took 195.972792ms, request: GET:http://127.0.0.1:34215/api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5
I1017 16:38:00.266337  108271 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5: (3.430004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:38:00.266830  108271 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1017 16:38:00.266868  108271 factory.go:369] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
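
[Note] The "Throttling request took ..." lines are client-side, not the server pushing back: client-go rate-limits its own requests (5 QPS with a burst of 10 by default) and logs waits it considers long. Creating six policy ConfigMaps back to back plausibly exhausts the burst, hence the ~165-195ms waits. A sketch of the limiter in isolation, using client-go's flowcontrol package (the loop is illustrative):

    package throttle

    import "k8s.io/client-go/util/flowcontrol"

    // demo shows the token bucket that produces waits like those above:
    // the first 10 calls pass immediately (burst), then Accept blocks
    // to hold the rate at 5 per second.
    func demo(doRequest func()) {
    	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
    	for i := 0; i < 20; i++ {
    		limiter.Accept() // blocks once the burst is spent
    		doRequest()
    	}
    }
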
I1017 16:38:00.462235  108271 request.go:538] Throttling request took 194.961588ms, request: DELETE:http://127.0.0.1:34215/api/v1/nodes
I1017 16:38:00.464098  108271 httplog.go:90] DELETE /api/v1/nodes: (1.611554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
I1017 16:38:00.464287  108271 controller.go:185] Shutting down kubernetes service endpoint reconciler
I1017 16:38:00.465814  108271 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.310873ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52154]
--- FAIL: TestSchedulerCreationFromConfigMap (4.25s)
    scheduler_test.go:310: Expected predicates map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:310: Expected predicates map[CheckNodeUnschedulable:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{}]
    scheduler_test.go:310: Expected predicates map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{}]
    scheduler_test.go:310: Expected predicates map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:310: Expected predicates map[CheckNodeUnschedulable:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{}]
    scheduler_test.go:310: Expected predicates map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{}]
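
[Note] All six assertions fail the same way: the expected predicate set includes PodToleratesNodeTaints, the actual set lacks it, and everything else matches. That is consistent with the PR under test graduating TaintNodesByCondition to GA and thereby changing how PodToleratesNodeTaints is registered, while the expectations at scheduler_test.go:310 still list it — read that causal link as plausible rather than proven by the log alone. The diff is easy to confirm mechanically (a sketch, not the test's code):

    package diffsets

    import "sort"

    // missing returns the names expected but absent from got; for every
    // failure above it would return just ["PodToleratesNodeTaints"].
    func missing(expected, got map[string]struct{}) []string {
    	var out []string
    	for name := range expected {
    		if _, ok := got[name]; !ok {
    			out = append(out, name)
    		}
    	}
    	sort.Strings(out)
    	return out
    }
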

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20191017-162757.xml



2898 passed tests and 4 skipped tests omitted.

Error lines from build-log.txt

... skipping 613 lines ...
I1017 16:22:42.574] +++ [1017 16:22:42] Testing kubectl version
W1017 16:22:42.675] I1017 16:22:42.587774   52942 garbagecollector.go:130] Starting garbage collector controller
W1017 16:22:42.675] I1017 16:22:42.587805   52942 controllermanager.go:534] Started "garbagecollector"
W1017 16:22:42.676] W1017 16:22:42.587829   52942 controllermanager.go:513] "bootstrapsigner" is disabled
W1017 16:22:42.676] I1017 16:22:42.587838   52942 shared_informer.go:197] Waiting for caches to sync for garbage collector
W1017 16:22:42.676] I1017 16:22:42.587883   52942 graph_builder.go:282] GraphBuilder running
W1017 16:22:42.677] E1017 16:22:42.588713   52942 core.go:79] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1017 16:22:42.677] W1017 16:22:42.588739   52942 controllermanager.go:526] Skipping "service"
W1017 16:22:42.677] I1017 16:22:42.589372   52942 controllermanager.go:534] Started "endpoint"
W1017 16:22:42.677] I1017 16:22:42.589681   52942 endpoints_controller.go:175] Starting endpoint controller
W1017 16:22:42.677] I1017 16:22:42.589718   52942 shared_informer.go:197] Waiting for caches to sync for endpoint
W1017 16:22:42.678] I1017 16:22:42.602595   52942 controllermanager.go:534] Started "namespace"
W1017 16:22:42.678] I1017 16:22:42.602690   52942 namespace_controller.go:200] Starting namespace controller
... skipping 13 lines ...
W1017 16:22:42.680] I1017 16:22:42.606805   52942 controllermanager.go:534] Started "daemonset"
W1017 16:22:42.680] I1017 16:22:42.606824   52942 daemon_controller.go:267] Starting daemon sets controller
W1017 16:22:42.680] I1017 16:22:42.606828   52942 core.go:211] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W1017 16:22:42.681] W1017 16:22:42.606839   52942 controllermanager.go:526] Skipping "route"
W1017 16:22:42.681] I1017 16:22:42.606850   52942 shared_informer.go:197] Waiting for caches to sync for daemon sets
W1017 16:22:42.681] I1017 16:22:42.607221   52942 node_lifecycle_controller.go:77] Sending events to api server
W1017 16:22:42.681] E1017 16:22:42.607259   52942 core.go:201] failed to start cloud node lifecycle controller: no cloud provider provided
W1017 16:22:42.681] W1017 16:22:42.607267   52942 controllermanager.go:526] Skipping "cloud-node-lifecycle"
W1017 16:22:42.682] I1017 16:22:42.607735   52942 controllermanager.go:534] Started "clusterrole-aggregation"
W1017 16:22:42.682] I1017 16:22:42.607856   52942 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
W1017 16:22:42.682] I1017 16:22:42.607882   52942 shared_informer.go:197] Waiting for caches to sync for ClusterRoleAggregator
W1017 16:22:42.682] I1017 16:22:42.608032   52942 controllermanager.go:534] Started "podgc"
W1017 16:22:42.682] I1017 16:22:42.608120   52942 gc_controller.go:75] Starting GC controller
... skipping 20 lines ...
W1017 16:22:42.686] I1017 16:22:42.615890   52942 shared_informer.go:197] Waiting for caches to sync for HPA
W1017 16:22:42.687] I1017 16:22:42.670002   52942 shared_informer.go:204] Caches are synced for certificate-csrapproving 
W1017 16:22:42.687] I1017 16:22:42.680577   52942 shared_informer.go:204] Caches are synced for service account 
W1017 16:22:42.687] I1017 16:22:42.683592   49408 controller.go:606] quota admission added evaluator for: serviceaccounts
W1017 16:22:42.704] I1017 16:22:42.704021   52942 shared_informer.go:204] Caches are synced for namespace 
W1017 16:22:42.708] I1017 16:22:42.708128   52942 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
W1017 16:22:42.720] E1017 16:22:42.719778   52942 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W1017 16:22:42.725] E1017 16:22:42.725156   52942 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W1017 16:22:42.733] E1017 16:22:42.732462   52942 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W1017 16:22:42.816] W1017 16:22:42.816074   52942 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W1017 16:22:42.909] I1017 16:22:42.909056   52942 shared_informer.go:204] Caches are synced for TTL 
W1017 16:22:42.970] I1017 16:22:42.969834   52942 shared_informer.go:204] Caches are synced for PVC protection 
W1017 16:22:42.971] I1017 16:22:42.969869   52942 shared_informer.go:204] Caches are synced for disruption 
W1017 16:22:42.971] I1017 16:22:42.970029   52942 disruption.go:341] Sending events to api server.
W1017 16:22:42.971] I1017 16:22:42.970143   52942 shared_informer.go:204] Caches are synced for deployment 
W1017 16:22:42.972] I1017 16:22:42.970921   52942 shared_informer.go:204] Caches are synced for ReplicaSet 
... skipping 89 lines ...
I1017 16:22:46.482] +++ working dir: /go/src/k8s.io/kubernetes
I1017 16:22:46.485] +++ command: run_RESTMapper_evaluation_tests
I1017 16:22:46.498] +++ [1017 16:22:46] Creating namespace namespace-1571329366-29714
I1017 16:22:46.582] namespace/namespace-1571329366-29714 created
I1017 16:22:46.671] Context "test" modified.
I1017 16:22:46.679] +++ [1017 16:22:46] Testing RESTMapper
I1017 16:22:46.805] +++ [1017 16:22:46] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I1017 16:22:46.821] +++ exit code: 0
I1017 16:22:46.957] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I1017 16:22:46.958] bindings                                                                      true         Binding
I1017 16:22:46.958] componentstatuses                 cs                                          false        ComponentStatus
I1017 16:22:46.958] configmaps                        cm                                          true         ConfigMap
I1017 16:22:46.958] endpoints                         ep                                          true         Endpoints
... skipping 317 lines ...
I1017 16:23:01.057] (Bcore.sh:79: Successful get pods/valid-pod {{.metadata.name}}: valid-pod
I1017 16:23:01.156] (Bcore.sh:81: Successful get pods {.items[*].metadata.name}: valid-pod
I1017 16:23:01.251] (Bcore.sh:82: Successful get pod valid-pod {.metadata.name}: valid-pod
I1017 16:23:01.349] (Bcore.sh:83: Successful get pod/valid-pod {.metadata.name}: valid-pod
I1017 16:23:01.443] (Bcore.sh:84: Successful get pods/valid-pod {.metadata.name}: valid-pod
I1017 16:23:01.550] (B
I1017 16:23:01.555] core.sh:86: FAIL!
I1017 16:23:01.555] Describe pods valid-pod
I1017 16:23:01.556]   Expected Match: Name:
I1017 16:23:01.556]   Not found in:
I1017 16:23:01.556] Name:         valid-pod
I1017 16:23:01.556] Namespace:    namespace-1571329379-20086
I1017 16:23:01.556] Priority:     0
... skipping 108 lines ...
I1017 16:23:01.910] QoS Class:        Guaranteed
I1017 16:23:01.910] Node-Selectors:   <none>
I1017 16:23:01.910] Tolerations:      <none>
I1017 16:23:01.910] Events:           <none>
I1017 16:23:01.910] (B
I1017 16:23:02.020] 
I1017 16:23:02.020] FAIL!
I1017 16:23:02.021] Describe pods
I1017 16:23:02.022]   Expected Match: Name:
I1017 16:23:02.022]   Not found in:
I1017 16:23:02.022] Name:         valid-pod
I1017 16:23:02.022] Namespace:    namespace-1571329379-20086
I1017 16:23:02.022] Priority:     0
... skipping 158 lines ...
I1017 16:23:06.570] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 16:23:06.780] (Bcore.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 16:23:06.892] (Bcore.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 16:23:07.098] (Bcore.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 16:23:07.204] (Bcore.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 16:23:07.296] (Bpod "valid-pod" force deleted
W1017 16:23:07.397] error: resource(s) were provided, but no name, label selector, or --all flag specified
W1017 16:23:07.397] error: setting 'all' parameter but found a non empty selector. 
W1017 16:23:07.398] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1017 16:23:07.498] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:23:07.507] (Bcore.sh:211: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I1017 16:23:07.586] (Bnamespace/test-kubectl-describe-pod created
I1017 16:23:07.692] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I1017 16:23:07.789] (Bcore.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I1017 16:23:08.894] (Bpoddisruptionbudget.policy/test-pdb-3 created
I1017 16:23:09.004] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I1017 16:23:09.083] (Bpoddisruptionbudget.policy/test-pdb-4 created
I1017 16:23:09.203] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I1017 16:23:09.390] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:23:09.600] (Bpod/env-test-pod created
W1017 16:23:09.701] error: min-available and max-unavailable cannot be both specified
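
[Note] That error is kubectl-side validation for kubectl create pdb: a PodDisruptionBudget may set minAvailable or maxUnavailable, never both. A sketch of a valid object using today's policy/v1 types (the 2019-era log would have used policy/v1beta1, and the selector is an assumption):

    package pdb

    import (
    	policyv1 "k8s.io/api/policy/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func examplePDB() *policyv1.PodDisruptionBudget {
    	max := intstr.FromString("50%")
    	return &policyv1.PodDisruptionBudget{
    		ObjectMeta: metav1.ObjectMeta{Name: "test-pdb-4"},
    		Spec: policyv1.PodDisruptionBudgetSpec{
    			MaxUnavailable: &max, // set this *or* MinAvailable, never both
    			Selector: &metav1.LabelSelector{
    				MatchLabels: map[string]string{"app": "demo"},
    			},
    		},
    	}
    }
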
I1017 16:23:09.801] 
I1017 16:23:09.802] core.sh:264: FAIL!
I1017 16:23:09.802] Describe pods --namespace=test-kubectl-describe-pod env-test-pod
I1017 16:23:09.802]   Expected Match: TEST_CMD_1
I1017 16:23:09.802]   Not found in:
I1017 16:23:09.802] Name:         env-test-pod
I1017 16:23:09.802] Namespace:    test-kubectl-describe-pod
I1017 16:23:09.802] Priority:     0
... skipping 23 lines ...
I1017 16:23:09.805] Tolerations:       <none>
I1017 16:23:09.805] Events:            <none>
I1017 16:23:09.805] (B
I1017 16:23:09.805] 264 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
I1017 16:23:09.805] (B
I1017 16:23:09.835] 
I1017 16:23:09.836] FAIL!
I1017 16:23:09.836] Describe pods --namespace=test-kubectl-describe-pod
I1017 16:23:09.836]   Expected Match: TEST_CMD_1
I1017 16:23:09.836]   Not found in:
I1017 16:23:09.836] Name:         env-test-pod
I1017 16:23:09.836] Namespace:    test-kubectl-describe-pod
I1017 16:23:09.836] Priority:     0
... skipping 35 lines ...
I1017 16:23:10.326] namespace "test-kubectl-describe-pod" deleted
I1017 16:23:15.455] +++ [1017 16:23:15] Creating namespace namespace-1571329395-32286
I1017 16:23:15.532] namespace/namespace-1571329395-32286 created
I1017 16:23:15.608] Context "test" modified.
I1017 16:23:15.705] core.sh:278: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:23:15.874] (Bpod/valid-pod created
W1017 16:23:16.033] error: the path "test/e2e/testing-manifests/kubectl/redis-master-pod.yaml" does not exist
I1017 16:23:16.134] Waiting for Get pods {{range.items}}{{.metadata.name}}:{{end}} : expected: redis-master:valid-pod:, got: valid-pod:
I1017 16:23:16.136] 
I1017 16:23:16.141] core.sh:283: FAIL!
I1017 16:23:16.141] Get pods {{range.items}}{{.metadata.name}}:{{end}}
I1017 16:23:16.142]   Expected: redis-master:valid-pod:
I1017 16:23:16.142]   Got:      valid-pod:
I1017 16:23:16.142] (B
I1017 16:23:16.142] 53 /go/src/k8s.io/kubernetes/hack/lib/test.sh
I1017 16:23:16.142] (B
I1017 16:23:16.236] Waiting for Get pods {{range.items}}{{.metadata.name}}:{{end}} : expected: redis-master:valid-pod:, got: valid-pod:
I1017 16:23:16.238] 
I1017 16:23:16.243] core.sh:287: FAIL!
I1017 16:23:16.243] Get pods {{range.items}}{{.metadata.name}}:{{end}}
I1017 16:23:16.243]   Expected: redis-master:valid-pod:
I1017 16:23:16.243]   Got:      valid-pod:
I1017 16:23:16.243] (B
I1017 16:23:16.243] 53 /go/src/k8s.io/kubernetes/hack/lib/test.sh
I1017 16:23:16.244] (B
I1017 16:23:16.337] pod "valid-pod" force deleted
W1017 16:23:16.438] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1017 16:23:16.438] Error from server (NotFound): pods "redis-master" not found
I1017 16:23:16.539] core.sh:291: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:23:16.539] (B+++ [1017 16:23:16] Creating namespace namespace-1571329396-9256
I1017 16:23:16.547] namespace/namespace-1571329396-9256 created
I1017 16:23:16.635] Context "test" modified.
I1017 16:23:16.744] core.sh:296: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:23:16.925] (Bpod/valid-pod created
... skipping 97 lines ...
I1017 16:23:23.896] (Bpod/valid-pod patched
I1017 16:23:23.996] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I1017 16:23:24.076] (Bpod/valid-pod patched
I1017 16:23:24.178] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I1017 16:23:24.342] (Bpod/valid-pod patched
I1017 16:23:24.446] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1017 16:23:24.622] (B+++ [1017 16:23:24] "kubectl patch with resourceVersion 498" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I1017 16:23:24.868] pod "valid-pod" deleted
I1017 16:23:24.877] pod/valid-pod replaced
I1017 16:23:24.982] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I1017 16:23:25.151] (BSuccessful
I1017 16:23:25.152] message:error: --grace-period must have --force specified
I1017 16:23:25.152] has:\-\-grace-period must have \-\-force specified
I1017 16:23:25.326] Successful
I1017 16:23:25.326] message:error: --timeout must have --force specified
I1017 16:23:25.326] has:\-\-timeout must have \-\-force specified
I1017 16:23:25.483] node/node-v1-test created
W1017 16:23:25.584] W1017 16:23:25.483526   52942 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I1017 16:23:25.685] node/node-v1-test replaced
I1017 16:23:25.764] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I1017 16:23:25.845] (Bnode "node-v1-test" deleted
I1017 16:23:25.952] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1017 16:23:26.249] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I1017 16:23:27.382] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 25 lines ...
I1017 16:23:27.963] (Bcore.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I1017 16:23:28.051] (Bpod/valid-pod labeled
W1017 16:23:28.151] Edit cancelled, no changes made.
W1017 16:23:28.152] Edit cancelled, no changes made.
W1017 16:23:28.152] Edit cancelled, no changes made.
W1017 16:23:28.152] Edit cancelled, no changes made.
W1017 16:23:28.152] error: 'name' already has a value (valid-pod), and --overwrite is false
I1017 16:23:28.253] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I1017 16:23:28.261] (Bcore.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 16:23:28.357] (Bpod "valid-pod" force deleted
W1017 16:23:28.458] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1017 16:23:28.559] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:23:28.559] (B+++ [1017 16:23:28] Creating namespace namespace-1571329408-18058
... skipping 82 lines ...
I1017 16:23:35.939] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I1017 16:23:35.942] +++ working dir: /go/src/k8s.io/kubernetes
I1017 16:23:35.945] +++ command: run_kubectl_create_error_tests
I1017 16:23:35.958] +++ [1017 16:23:35] Creating namespace namespace-1571329415-20463
I1017 16:23:36.036] namespace/namespace-1571329415-20463 created
I1017 16:23:36.115] Context "test" modified.
I1017 16:23:36.124] +++ [1017 16:23:36] Testing kubectl create with error
W1017 16:23:36.225] Error: must specify one of -f and -k
W1017 16:23:36.225] 
W1017 16:23:36.225] Create a resource from a file or from stdin.
W1017 16:23:36.225] 
W1017 16:23:36.225]  JSON and YAML formats are accepted.
W1017 16:23:36.225] 
W1017 16:23:36.225] Examples:
... skipping 41 lines ...
W1017 16:23:36.231] 
W1017 16:23:36.231] Usage:
W1017 16:23:36.232]   kubectl create -f FILENAME [options]
W1017 16:23:36.232] 
W1017 16:23:36.232] Use "kubectl <command> --help" for more information about a given command.
W1017 16:23:36.232] Use "kubectl options" for a list of global command-line options (applies to all commands).
I1017 16:23:36.409] +++ [1017 16:23:36] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W1017 16:23:36.510] kubectl convert is DEPRECATED and will be removed in a future version.
W1017 16:23:36.510] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1017 16:23:36.611] +++ exit code: 0
I1017 16:23:36.652] Recording: run_kubectl_apply_tests
I1017 16:23:36.652] Running command: run_kubectl_apply_tests
I1017 16:23:36.677] 
... skipping 17 lines ...
I1017 16:23:38.455] (Bpod "test-pod" deleted
I1017 16:23:38.706] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W1017 16:23:39.006] I1017 16:23:39.006010   49408 client.go:357] parsed scheme: "endpoint"
W1017 16:23:39.007] I1017 16:23:39.006063   49408 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W1017 16:23:39.011] I1017 16:23:39.011011   49408 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I1017 16:23:39.112] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W1017 16:23:39.212] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I1017 16:23:39.313] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1017 16:23:39.313] +++ exit code: 0
I1017 16:23:39.314] Recording: run_kubectl_run_tests
I1017 16:23:39.314] Running command: run_kubectl_run_tests
I1017 16:23:39.314] 
I1017 16:23:39.314] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 7 lines ...
I1017 16:23:39.747] (Bjob.batch/pi created
W1017 16:23:39.848] kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1017 16:23:39.849] I1017 16:23:39.738815   49408 controller.go:606] quota admission added evaluator for: jobs.batch
W1017 16:23:39.849] I1017 16:23:39.754330   52942 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1571329419-1881", Name:"pi", UID:"164dffca-e6d2-4d36-8dfb-5ced3221de00", APIVersion:"batch/v1", ResourceVersion:"506", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: pi-x6s6l
I1017 16:23:39.950] run.sh:33: Successful get jobs {{range.items}}{{.metadata.name}}:{{end}}: pi:
I1017 16:23:39.992] (B
I1017 16:23:39.993] FAIL!
I1017 16:23:39.993] Describe pods
I1017 16:23:39.993]   Expected Match: Name:
I1017 16:23:39.993]   Not found in:
I1017 16:23:39.993] Name:           pi-x6s6l
I1017 16:23:39.994] Namespace:      namespace-1571329419-1881
I1017 16:23:39.994] Priority:       0
... skipping 83 lines ...
I1017 16:23:42.102] Context "test" modified.
I1017 16:23:42.110] +++ [1017 16:23:42] Testing kubectl create filter
I1017 16:23:42.201] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:23:42.405] (Bpod/selector-test-pod created
I1017 16:23:42.510] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1017 16:23:42.599] (BSuccessful
I1017 16:23:42.599] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1017 16:23:42.600] has:pods "selector-test-pod-dont-apply" not found
I1017 16:23:42.680] pod "selector-test-pod" deleted
I1017 16:23:42.703] +++ exit code: 0
I1017 16:23:42.738] Recording: run_kubectl_apply_deployments_tests
I1017 16:23:42.738] Running command: run_kubectl_apply_deployments_tests
I1017 16:23:42.762] 
... skipping 25 lines ...
I1017 16:23:44.761] (Bapps.sh:139: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:23:44.855] (Bapps.sh:140: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:23:44.952] (Bapps.sh:144: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:23:45.111] (Bdeployment.apps/nginx created
I1017 16:23:45.215] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I1017 16:23:49.436] (BSuccessful
I1017 16:23:49.437] message:Error from server (Conflict): error when applying patch:
I1017 16:23:49.438] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1571329422-25220\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I1017 16:23:49.438] to:
I1017 16:23:49.438] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I1017 16:23:49.438] Name: "nginx", Namespace: "namespace-1571329422-25220"
I1017 16:23:49.440] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1571329422-25220\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-10-17T16:23:45Z" "generation":'\x01' "labels":map["name":"nginx"] "name":"nginx" "namespace":"namespace-1571329422-25220" "resourceVersion":"594" "selfLink":"/apis/apps/v1/namespaces/namespace-1571329422-25220/deployments/nginx" "uid":"3abb85ec-0326-478d-95b8-d019d3a60698"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-10-17T16:23:45Z" "lastUpdateTime":"2019-10-17T16:23:45Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-10-17T16:23:45Z" "lastUpdateTime":"2019-10-17T16:23:45Z" "message":"ReplicaSet \"nginx-8484dd655\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I1017 16:23:49.440] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I1017 16:23:49.440] has:Error from server (Conflict)
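
[Note] The Conflict is optimistic concurrency working as intended: the applied manifest carried a stale resourceVersion ("99") while the live Deployment was at "594", so the server refused the patch. The standard client-side remedy is re-read and retry, and client-go ships a helper for exactly this (sketch; the mutation closure is the caller's responsibility):

    package conflictretry

    import "k8s.io/client-go/util/retry"

    // updateWithRetry re-runs the closure whenever the server answers
    // 409 Conflict; the closure should GET the latest object, re-apply
    // the change, and UPDATE.
    func updateWithRetry(mutate func() error) error {
    	return retry.RetryOnConflict(retry.DefaultRetry, mutate)
    }
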
W1017 16:23:49.541] I1017 16:23:45.113663   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329422-25220", Name:"nginx", UID:"3abb85ec-0326-478d-95b8-d019d3a60698", APIVersion:"apps/v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
W1017 16:23:49.542] I1017 16:23:45.118166   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329422-25220", Name:"nginx-8484dd655", UID:"3bedbab6-b3a2-4131-aaa1-9f1d08b90558", APIVersion:"apps/v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-jcq2s
W1017 16:23:49.542] I1017 16:23:45.122015   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329422-25220", Name:"nginx-8484dd655", UID:"3bedbab6-b3a2-4131-aaa1-9f1d08b90558", APIVersion:"apps/v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-m9hrn
W1017 16:23:49.543] I1017 16:23:45.122398   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329422-25220", Name:"nginx-8484dd655", UID:"3bedbab6-b3a2-4131-aaa1-9f1d08b90558", APIVersion:"apps/v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-8n82m
W1017 16:23:50.305] I1017 16:23:50.304861   52942 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1571329413-19457
I1017 16:23:54.708] deployment.apps/nginx configured
... skipping 146 lines ...
I1017 16:24:02.144] +++ [1017 16:24:02] Creating namespace namespace-1571329442-25768
I1017 16:24:02.223] namespace/namespace-1571329442-25768 created
I1017 16:24:02.300] Context "test" modified.
I1017 16:24:02.309] +++ [1017 16:24:02] Testing kubectl get
I1017 16:24:02.402] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:24:02.493] (BSuccessful
I1017 16:24:02.494] message:Error from server (NotFound): pods "abc" not found
I1017 16:24:02.494] has:pods "abc" not found
I1017 16:24:02.583] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:24:02.674] (BSuccessful
I1017 16:24:02.674] message:Error from server (NotFound): pods "abc" not found
I1017 16:24:02.674] has:pods "abc" not found
I1017 16:24:02.764] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:24:02.851] (BSuccessful
I1017 16:24:02.851] message:{
I1017 16:24:02.852]     "apiVersion": "v1",
I1017 16:24:02.852]     "items": [],
... skipping 23 lines ...
I1017 16:24:03.203] has not:No resources found
I1017 16:24:03.290] Successful
I1017 16:24:03.291] message:NAME
I1017 16:24:03.291] has not:No resources found
I1017 16:24:03.384] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:24:03.485] (BSuccessful
I1017 16:24:03.485] message:error: the server doesn't have a resource type "foobar"
I1017 16:24:03.486] has not:No resources found
I1017 16:24:03.573] Successful
I1017 16:24:03.573] message:No resources found in namespace-1571329442-25768 namespace.
I1017 16:24:03.574] has:No resources found
I1017 16:24:03.664] Successful
I1017 16:24:03.664] message:
I1017 16:24:03.664] has not:No resources found
I1017 16:24:03.758] Successful
I1017 16:24:03.759] message:No resources found in namespace-1571329442-25768 namespace.
I1017 16:24:03.759] has:No resources found
I1017 16:24:03.853] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:24:03.942] (BSuccessful
I1017 16:24:03.943] message:Error from server (NotFound): pods "abc" not found
I1017 16:24:03.943] has:pods "abc" not found
I1017 16:24:03.944] FAIL!
I1017 16:24:03.944] message:Error from server (NotFound): pods "abc" not found
I1017 16:24:03.944] has not:List
I1017 16:24:03.945] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I1017 16:24:04.058] Successful
I1017 16:24:04.058] message:I1017 16:24:04.009725   62723 loader.go:375] Config loaded from file:  /tmp/tmp.DN1mQSQXVk/.kube/config
I1017 16:24:04.058] I1017 16:24:04.012018   62723 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I1017 16:24:04.059] I1017 16:24:04.034384   62723 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
... skipping 660 lines ...
I1017 16:24:09.681] Successful
I1017 16:24:09.682] message:NAME    DATA   AGE
I1017 16:24:09.682] one     0      0s
I1017 16:24:09.682] three   0      0s
I1017 16:24:09.682] two     0      0s
I1017 16:24:09.682] STATUS    REASON          MESSAGE
I1017 16:24:09.683] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1017 16:24:09.683] has not:watch is only supported on individual resources
I1017 16:24:10.786] Successful
I1017 16:24:10.787] message:STATUS    REASON          MESSAGE
I1017 16:24:10.787] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1017 16:24:10.787] has not:watch is only supported on individual resources
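Note on the two watch assertions above: kubectl get --watch is expected to work against a whole collection here (the old "watch is only supported on individual resources" error must not appear), and the InternalError is just the client-side request timeout tearing down the watch stream. A minimal sketch of the behavior under test, assuming a reachable cluster and the configmaps created above:

$ kubectl get configmaps --watch --request-timeout=1s    # watch a collection; the 1s client timeout closes the stream
$ kubectl get configmap one --watch --request-timeout=1s # watching a single named resource behaves the same way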
I1017 16:24:10.792] +++ [1017 16:24:10] Creating namespace namespace-1571329450-32436
I1017 16:24:10.870] namespace/namespace-1571329450-32436 created
I1017 16:24:10.939] Context "test" modified.
I1017 16:24:11.038] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:24:11.202] pod/valid-pod created
... skipping 56 lines ...
I1017 16:24:11.296] }
I1017 16:24:11.385] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 16:24:11.642] <no value>Successful
I1017 16:24:11.643] message:valid-pod:
I1017 16:24:11.643] has:valid-pod:
I1017 16:24:11.726] Successful
I1017 16:24:11.727] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I1017 16:24:11.727] 	template was:
I1017 16:24:11.727] 		{.missing}
I1017 16:24:11.727] 	object given to jsonpath engine was:
I1017 16:24:11.729] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-10-17T16:24:11Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1571329450-32436", "resourceVersion":"697", "selfLink":"/api/v1/namespaces/namespace-1571329450-32436/pods/valid-pod", "uid":"471e1ea1-d453-49f5-8614-759930a132b7"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I1017 16:24:11.729] has:missing is not found
I1017 16:24:11.828] Successful
I1017 16:24:11.828] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I1017 16:24:11.828] 	template was:
I1017 16:24:11.828] 		{{.missing}}
I1017 16:24:11.829] 	raw data was:
I1017 16:24:11.829] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-10-17T16:24:11Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1571329450-32436","resourceVersion":"697","selfLink":"/api/v1/namespaces/namespace-1571329450-32436/pods/valid-pod","uid":"471e1ea1-d453-49f5-8614-759930a132b7"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I1017 16:24:11.829] 	object given to template engine was:
I1017 16:24:11.830] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-10-17T16:24:11Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1571329450-32436 resourceVersion:697 selfLink:/api/v1/namespaces/namespace-1571329450-32436/pods/valid-pod uid:471e1ea1-d453-49f5-8614-759930a132b7] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I1017 16:24:11.830] has:map has no entry for key "missing"
W1017 16:24:11.931] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
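The two template failures above are intentional: both output printers are pointed at a key the Pod object does not have, and each reports it differently. A sketch of the distinction, using the valid-pod created above:

$ kubectl get pod valid-pod -o jsonpath='{.metadata.name}' # existing key: prints "valid-pod"
$ kubectl get pod valid-pod -o jsonpath='{.missing}'       # jsonpath error: missing is not found
$ kubectl get pod valid-pod -o go-template='{{.missing}}'  # go-template error: map has no entry for key "missing"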
I1017 16:24:12.916] Successful
I1017 16:24:12.916] message:NAME        READY   STATUS    RESTARTS   AGE
I1017 16:24:12.916] valid-pod   0/1     Pending   0          0s
I1017 16:24:12.916] STATUS      REASON          MESSAGE
I1017 16:24:12.917] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1017 16:24:12.917] has:STATUS
I1017 16:24:12.918] Successful
I1017 16:24:12.918] message:NAME        READY   STATUS    RESTARTS   AGE
I1017 16:24:12.919] valid-pod   0/1     Pending   0          0s
I1017 16:24:12.919] STATUS      REASON          MESSAGE
I1017 16:24:12.919] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1017 16:24:12.919] has:valid-pod
I1017 16:24:14.004] Successful
I1017 16:24:14.004] message:pod/valid-pod
I1017 16:24:14.005] has not:STATUS
I1017 16:24:14.005] Successful
I1017 16:24:14.006] message:pod/valid-pod
... skipping 72 lines ...
I1017 16:24:15.108] status:
I1017 16:24:15.108]   phase: Pending
I1017 16:24:15.108]   qosClass: Guaranteed
I1017 16:24:15.108] ---
I1017 16:24:15.108] has:name: valid-pod
I1017 16:24:15.193] Successful
I1017 16:24:15.193] message:Error from server (NotFound): pods "invalid-pod" not found
I1017 16:24:15.193] has:"invalid-pod" not found
I1017 16:24:15.277] pod "valid-pod" deleted
I1017 16:24:15.380] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:24:15.541] pod/redis-master created
I1017 16:24:15.545] pod/valid-pod created
I1017 16:24:15.644] Successful
... skipping 35 lines ...
I1017 16:24:16.870] +++ command: run_kubectl_exec_pod_tests
I1017 16:24:16.881] +++ [1017 16:24:16] Creating namespace namespace-1571329456-28149
I1017 16:24:16.968] namespace/namespace-1571329456-28149 created
I1017 16:24:17.048] Context "test" modified.
I1017 16:24:17.057] +++ [1017 16:24:17] Testing kubectl exec POD COMMAND
I1017 16:24:17.168] Successful
I1017 16:24:17.169] message:Error from server (NotFound): pods "abc" not found
I1017 16:24:17.169] has:pods "abc" not found
I1017 16:24:17.335] pod/test-pod created
I1017 16:24:17.495] Successful
I1017 16:24:17.496] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1017 16:24:17.496] has not:pods "test-pod" not found
I1017 16:24:17.500] Successful
I1017 16:24:17.500] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1017 16:24:17.501] has not:pod or type/name must be specified
I1017 16:24:17.636] pod "test-pod" deleted
I1017 16:24:17.669] +++ exit code: 0
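The BadRequest responses above are expected: in this API-server-only harness no kubelet ever schedules test-pod, and exec requires a pod that is bound to a node. In sketch form, the commands being exercised are:

$ kubectl exec abc -- date      # NotFound: the pod does not exist at all
$ kubectl exec test-pod -- date # BadRequest: pod test-pod does not have a host assigned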
I1017 16:24:17.744] Recording: run_kubectl_exec_resource_name_tests
I1017 16:24:17.745] Running command: run_kubectl_exec_resource_name_tests
I1017 16:24:17.782] 
... skipping 2 lines ...
I1017 16:24:17.797] +++ command: run_kubectl_exec_resource_name_tests
I1017 16:24:17.813] +++ [1017 16:24:17] Creating namespace namespace-1571329457-21514
I1017 16:24:17.939] namespace/namespace-1571329457-21514 created
I1017 16:24:18.064] Context "test" modified.
I1017 16:24:18.076] +++ [1017 16:24:18] Testing kubectl exec TYPE/NAME COMMAND
I1017 16:24:18.243] Successful
I1017 16:24:18.244] message:error: the server doesn't have a resource type "foo"
I1017 16:24:18.244] has:error:
I1017 16:24:18.384] Successful
I1017 16:24:18.384] message:Error from server (NotFound): deployments.apps "bar" not found
I1017 16:24:18.385] has:"bar" not found
I1017 16:24:18.673] pod/test-pod created
I1017 16:24:18.935] replicaset.apps/frontend created
W1017 16:24:19.037] I1017 16:24:18.939715   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329457-21514", Name:"frontend", UID:"e1506783-367d-4fb4-98a9-ec20601d36c7", APIVersion:"apps/v1", ResourceVersion:"750", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-v6hzg
W1017 16:24:19.038] I1017 16:24:18.945021   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329457-21514", Name:"frontend", UID:"e1506783-367d-4fb4-98a9-ec20601d36c7", APIVersion:"apps/v1", ResourceVersion:"750", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-x82v8
W1017 16:24:19.039] I1017 16:24:18.945299   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329457-21514", Name:"frontend", UID:"e1506783-367d-4fb4-98a9-ec20601d36c7", APIVersion:"apps/v1", ResourceVersion:"750", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-q8svw
I1017 16:24:19.217] configmap/test-set-env-config created
I1017 16:24:19.368] Successful
I1017 16:24:19.369] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I1017 16:24:19.369] has:not implemented
I1017 16:24:19.536] Successful
I1017 16:24:19.537] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1017 16:24:19.537] has not:not found
I1017 16:24:19.540] Successful
I1017 16:24:19.541] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1017 16:24:19.541] has not:pod or type/name must be specified
I1017 16:24:19.726] Successful
I1017 16:24:19.727] message:Error from server (BadRequest): pod frontend-q8svw does not have a host assigned
I1017 16:24:19.727] has not:not found
I1017 16:24:19.729] Successful
I1017 16:24:19.729] message:Error from server (BadRequest): pod frontend-q8svw does not have a host assigned
I1017 16:24:19.730] has not:pod or type/name must be specified
I1017 16:24:19.863] pod "test-pod" deleted
I1017 16:24:19.998] replicaset.apps "frontend" deleted
I1017 16:24:20.139] configmap "test-set-env-config" deleted
I1017 16:24:20.176] +++ exit code: 0
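For TYPE/NAME arguments, kubectl exec (and attach) resolve the target pod through the resource's label selector, which is why the ReplicaSet resolves to one of its frontend-* pods while a ConfigMap, having no selector, fails with "not implemented". Sketched with the objects created above:

$ kubectl exec foo/bar -- date                       # error: the server doesn't have a resource type "foo"
$ kubectl exec rs/frontend -- date                   # resolves to one pod via the ReplicaSet's selector
$ kubectl exec configmap/test-set-env-config -- date # error: cannot attach to *v1.ConfigMap: selector not implemented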
I1017 16:24:20.230] Recording: run_create_secret_tests
I1017 16:24:20.231] Running command: run_create_secret_tests
I1017 16:24:20.275] 
I1017 16:24:20.277] +++ Running case: test-cmd.run_create_secret_tests 
I1017 16:24:20.283] +++ working dir: /go/src/k8s.io/kubernetes
I1017 16:24:20.286] +++ command: run_create_secret_tests
I1017 16:24:20.440] Successful
I1017 16:24:20.441] message:Error from server (NotFound): secrets "mysecret" not found
I1017 16:24:20.441] has:secrets "mysecret" not found
I1017 16:24:20.693] Successful
I1017 16:24:20.694] message:Error from server (NotFound): secrets "mysecret" not found
I1017 16:24:20.694] has:secrets "mysecret" not found
I1017 16:24:20.697] Successful
I1017 16:24:20.697] message:user-specified
I1017 16:24:20.697] has:user-specified
I1017 16:24:20.816] Successful
I1017 16:24:20.965] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"382c72d4-f9dd-41f4-93c6-3aaa4212a7f6","resourceVersion":"772","creationTimestamp":"2019-10-17T16:24:20Z"}}
... skipping 2 lines ...
I1017 16:24:21.173] has:uid
I1017 16:24:21.257] Successful
I1017 16:24:21.258] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"382c72d4-f9dd-41f4-93c6-3aaa4212a7f6","resourceVersion":"773","creationTimestamp":"2019-10-17T16:24:20Z"},"data":{"key1":"config1"}}
I1017 16:24:21.258] has:config1
I1017 16:24:21.336] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"382c72d4-f9dd-41f4-93c6-3aaa4212a7f6"}}
I1017 16:24:21.437] Successful
I1017 16:24:21.437] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I1017 16:24:21.438] has:configmaps "tester-update-cm" not found
I1017 16:24:21.451] +++ exit code: 0
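run_create_secret_tests first proves the secret is absent, creates it from a literal, reads the value back, then drives raw ConfigMap create/update/delete through the REST path shown in the JSON lines above. A hedged sketch of creating and reading such a secret (names from the log; not necessarily the exact invocation the test uses):

$ kubectl create secret generic mysecret --from-literal=username=user-specified
$ kubectl get secret mysecret -o jsonpath='{.data.username}' | base64 --decode # prints: user-specified
$ kubectl delete secret mysecret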
I1017 16:24:21.489] Recording: run_kubectl_create_kustomization_directory_tests
I1017 16:24:21.489] Running command: run_kubectl_create_kustomization_directory_tests
I1017 16:24:21.513] 
I1017 16:24:21.516] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
I1017 16:24:24.449] valid-pod   0/1     Pending   0          0s
I1017 16:24:24.449] has:valid-pod
I1017 16:24:25.543] Successful
I1017 16:24:25.543] message:NAME        READY   STATUS    RESTARTS   AGE
I1017 16:24:25.543] valid-pod   0/1     Pending   0          0s
I1017 16:24:25.544] STATUS      REASON          MESSAGE
I1017 16:24:25.544] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1017 16:24:25.544] has:Timeout exceeded while reading body
I1017 16:24:25.629] Successful
I1017 16:24:25.630] message:NAME        READY   STATUS    RESTARTS   AGE
I1017 16:24:25.630] valid-pod   0/1     Pending   0          1s
I1017 16:24:25.631] has:valid-pod
I1017 16:24:25.701] Successful
I1017 16:24:25.702] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I1017 16:24:25.703] has:Invalid timeout value
I1017 16:24:25.781] pod "valid-pod" deleted
I1017 16:24:25.804] +++ exit code: 0
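The "Invalid timeout value" assertion exercises kubectl's timeout parsing: the value must be a bare integer (seconds) or an integer followed by a time unit. A sketch using the global --request-timeout flag (assuming that is the flag under test here):

$ kubectl get pod valid-pod --request-timeout=1   # plain integer: one second
$ kubectl get pod valid-pod --request-timeout=2m  # integer plus unit
$ kubectl get pod valid-pod --request-timeout=2mm # rejected: Invalid timeout value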
I1017 16:24:25.840] Recording: run_crd_tests
I1017 16:24:25.840] Running command: run_crd_tests
I1017 16:24:25.866] 
... skipping 158 lines ...
I1017 16:24:31.173] foo.company.com/test patched
I1017 16:24:31.270] crd.sh:236: Successful get foos/test {{.patched}}: value1
I1017 16:24:31.353] foo.company.com/test patched
I1017 16:24:31.453] crd.sh:238: Successful get foos/test {{.patched}}: value2
I1017 16:24:31.541] foo.company.com/test patched
I1017 16:24:31.641] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I1017 16:24:31.808] +++ [1017 16:24:31] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I1017 16:24:31.876] {
I1017 16:24:31.877]     "apiVersion": "company.com/v1",
I1017 16:24:31.877]     "kind": "Foo",
I1017 16:24:31.877]     "metadata": {
I1017 16:24:31.878]         "annotations": {
I1017 16:24:31.878]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 190 lines ...
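The patch assertions above show why CustomResources need --type merge: strategic merge patch depends on struct tags compiled into the built-in API types, and a CRD-backed Foo has none, so kubectl cannot compute the patch locally. A sketch against the foos/test object (foo.yaml is a hypothetical local manifest):

$ kubectl patch foos/test --type merge -p '{"patched":"value1"}'  # JSON merge patch: works for a CR
$ kubectl patch -f foo.yaml --local -p '{"patched":null}' -o yaml # default strategic merge: fails locally for a CR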
I1017 16:24:51.985] namespace/non-native-resources created
I1017 16:24:52.148] bar.company.com/test created
I1017 16:24:52.259] crd.sh:455: Successful get bars {{len .items}}: 1
I1017 16:24:52.341] namespace "non-native-resources" deleted
I1017 16:24:57.557] crd.sh:458: Successful get bars {{len .items}}: 0
I1017 16:24:57.733] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W1017 16:24:57.834] Error from server (NotFound): namespaces "non-native-resources" not found
I1017 16:24:57.935] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I1017 16:24:57.966] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1017 16:24:58.087] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I1017 16:24:58.119] +++ exit code: 0
I1017 16:24:58.158] Recording: run_cmd_with_img_tests
I1017 16:24:58.158] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W1017 16:24:58.495] I1017 16:24:58.494391   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329498-25057", Name:"test1-6cdffdb5b8", UID:"1f259c35-f7a4-4c1a-8631-2939c639290c", APIVersion:"apps/v1", ResourceVersion:"927", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-7gdnz
I1017 16:24:58.595] Successful
I1017 16:24:58.596] message:deployment.apps/test1 created
I1017 16:24:58.596] has:deployment.apps/test1 created
I1017 16:24:58.607] deployment.apps "test1" deleted
I1017 16:24:58.718] Successful
I1017 16:24:58.718] message:error: Invalid image name "InvalidImageName": invalid reference format
I1017 16:24:58.718] has:error: Invalid image name "InvalidImageName": invalid reference format
I1017 16:24:58.734] +++ exit code: 0
I1017 16:24:58.780] +++ [1017 16:24:58] Testing recursive resources
I1017 16:24:58.787] +++ [1017 16:24:58] Creating namespace namespace-1571329498-17280
I1017 16:24:58.867] namespace/namespace-1571329498-17280 created
I1017 16:24:58.947] Context "test" modified.
W1017 16:24:59.048] W1017 16:24:58.742038   49408 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1017 16:24:59.049] E1017 16:24:58.744125   52942 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:24:59.049] W1017 16:24:58.853094   49408 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1017 16:24:59.050] E1017 16:24:58.854198   52942 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:24:59.054] W1017 16:24:58.974499   49408 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1017 16:24:59.055] E1017 16:24:58.976039   52942 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:24:59.095] W1017 16:24:59.095040   49408 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1017 16:24:59.106] E1017 16:24:59.105434   52942 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:24:59.206] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:24:59.411] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 16:24:59.416] Successful
I1017 16:24:59.417] message:pod/busybox0 created
I1017 16:24:59.417] pod/busybox1 created
I1017 16:24:59.417] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1017 16:24:59.417] has:error validating data: kind not set
I1017 16:24:59.516] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 16:24:59.718] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I1017 16:24:59.721] Successful
I1017 16:24:59.722] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 16:24:59.722] has:Object 'Kind' is missing
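Here the recursive tests feed kubectl a directory tree in which one manifest, busybox-broken.yaml, deliberately misspells `kind` as `ind`; the point being asserted is that the valid manifests are still processed and the broken file yields a per-file error rather than aborting the batch. Sketch:

$ kubectl create -f hack/testdata/recursive/pod --recursive                   # creates busybox0 and busybox1, reports the validation error
$ kubectl create -f hack/testdata/recursive/pod --recursive --validate=false  # skips client-side validation; the file still fails to decode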
W1017 16:24:59.823] E1017 16:24:59.745451   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:24:59.856] E1017 16:24:59.855702   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:24:59.957] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 16:25:00.140] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1017 16:25:00.144] Successful
I1017 16:25:00.144] message:pod/busybox0 replaced
I1017 16:25:00.144] pod/busybox1 replaced
I1017 16:25:00.144] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1017 16:25:00.144] has:error validating data: kind not set
W1017 16:25:00.245] E1017 16:24:59.977874   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:00.246] E1017 16:25:00.106902   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:00.346] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 16:25:00.374] Successful
I1017 16:25:00.374] message:Name:         busybox0
I1017 16:25:00.374] Namespace:    namespace-1571329498-17280
I1017 16:25:00.375] Priority:     0
I1017 16:25:00.375] Node:         <none>
... skipping 159 lines ...
I1017 16:25:00.394] has:Object 'Kind' is missing
I1017 16:25:00.490] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 16:25:00.697] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I1017 16:25:00.699] Successful
I1017 16:25:00.700] message:pod/busybox0 annotated
I1017 16:25:00.700] pod/busybox1 annotated
I1017 16:25:00.700] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 16:25:00.700] has:Object 'Kind' is missing
I1017 16:25:00.809] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 16:25:01.088] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1017 16:25:01.091] Successful
I1017 16:25:01.091] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1017 16:25:01.092] pod/busybox0 configured
I1017 16:25:01.092] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1017 16:25:01.092] pod/busybox1 configured
I1017 16:25:01.092] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1017 16:25:01.092] has:error validating data: kind not set
I1017 16:25:01.183] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:25:01.342] deployment.apps/nginx created
W1017 16:25:01.442] E1017 16:25:00.747156   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:01.443] E1017 16:25:00.857088   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:01.443] E1017 16:25:00.979905   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:01.443] E1017 16:25:01.108608   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:01.444] I1017 16:25:01.344924   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329498-17280", Name:"nginx", UID:"c472b74a-e755-45d0-831b-52e93ecf5da2", APIVersion:"apps/v1", ResourceVersion:"952", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W1017 16:25:01.444] I1017 16:25:01.348527   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329498-17280", Name:"nginx-f87d999f7", UID:"70037c9f-2c0e-4537-b826-088b74a07a95", APIVersion:"apps/v1", ResourceVersion:"953", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-wxp7n
W1017 16:25:01.444] I1017 16:25:01.352266   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329498-17280", Name:"nginx-f87d999f7", UID:"70037c9f-2c0e-4537-b826-088b74a07a95", APIVersion:"apps/v1", ResourceVersion:"953", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-58fxz
W1017 16:25:01.445] I1017 16:25:01.352626   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329498-17280", Name:"nginx-f87d999f7", UID:"70037c9f-2c0e-4537-b826-088b74a07a95", APIVersion:"apps/v1", ResourceVersion:"953", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-bt66f
I1017 16:25:01.545] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1017 16:25:01.549] generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 41 lines ...
I1017 16:25:01.733]       terminationGracePeriodSeconds: 30
I1017 16:25:01.733] status: {}
I1017 16:25:01.733] has:extensions/v1beta1
I1017 16:25:01.812] deployment.apps "nginx" deleted
W1017 16:25:01.912] kubectl convert is DEPRECATED and will be removed in a future version.
W1017 16:25:01.913] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W1017 16:25:01.913] E1017 16:25:01.749051   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:01.914] E1017 16:25:01.858588   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:01.982] E1017 16:25:01.981474   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:02.082] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 16:25:02.103] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 16:25:02.105] Successful
I1017 16:25:02.105] message:kubectl convert is DEPRECATED and will be removed in a future version.
I1017 16:25:02.105] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1017 16:25:02.106] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 16:25:02.106] has:Object 'Kind' is missing
I1017 16:25:02.199] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 16:25:02.292] Successful
I1017 16:25:02.293] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 16:25:02.293] has:busybox0:busybox1:
I1017 16:25:02.296] Successful
I1017 16:25:02.296] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 16:25:02.296] has:Object 'Kind' is missing
I1017 16:25:02.391] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 16:25:02.484] pod/busybox0 labeled
I1017 16:25:02.484] pod/busybox1 labeled
I1017 16:25:02.485] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 16:25:02.585] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I1017 16:25:02.588] Successful
I1017 16:25:02.590] message:pod/busybox0 labeled
I1017 16:25:02.590] pod/busybox1 labeled
I1017 16:25:02.590] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 16:25:02.590] has:Object 'Kind' is missing
I1017 16:25:02.687] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 16:25:02.779] pod/busybox0 patched
I1017 16:25:02.780] pod/busybox1 patched
I1017 16:25:02.780] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 16:25:02.886] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I1017 16:25:02.889] Successful
I1017 16:25:02.889] message:pod/busybox0 patched
I1017 16:25:02.889] pod/busybox1 patched
I1017 16:25:02.889] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 16:25:02.890] has:Object 'Kind' is missing
I1017 16:25:02.984] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 16:25:03.178] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:25:03.181] Successful
I1017 16:25:03.181] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1017 16:25:03.182] pod "busybox0" force deleted
I1017 16:25:03.182] pod "busybox1" force deleted
I1017 16:25:03.182] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 16:25:03.183] has:Object 'Kind' is missing
I1017 16:25:03.281] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:25:03.448] replicationcontroller/busybox0 created
I1017 16:25:03.454] replicationcontroller/busybox1 created
W1017 16:25:03.554] E1017 16:25:02.109882   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:03.555] I1017 16:25:02.444912   52942 namespace_controller.go:185] Namespace has been deleted non-native-resources
W1017 16:25:03.556] E1017 16:25:02.751475   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:03.556] E1017 16:25:02.860871   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:03.557] E1017 16:25:02.983357   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:03.557] E1017 16:25:03.111109   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:03.557] I1017 16:25:03.451946   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329498-17280", Name:"busybox0", UID:"4da4169d-efc8-451a-94ac-ac2de5074f5c", APIVersion:"v1", ResourceVersion:"983", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-m5kp4
W1017 16:25:03.558] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1017 16:25:03.558] I1017 16:25:03.458163   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329498-17280", Name:"busybox1", UID:"9234908c-73b4-4bb4-a029-38feb767edc0", APIVersion:"v1", ResourceVersion:"985", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-kd4l6
I1017 16:25:03.659] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 16:25:03.660] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 16:25:03.757] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I1017 16:25:03.852] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I1017 16:25:04.036] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1017 16:25:04.131] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1017 16:25:04.133] Successful
I1017 16:25:04.134] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I1017 16:25:04.134] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I1017 16:25:04.135] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 16:25:04.135] has:Object 'Kind' is missing
I1017 16:25:04.224] horizontalpodautoscaler.autoscaling "busybox0" deleted
I1017 16:25:04.312] horizontalpodautoscaler.autoscaling "busybox1" deleted
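The HPA checks above (min 1, max 2, target 80%) come from autoscaling every replication controller under the directory at once; the single-resource equivalent of what is asserted is:

$ kubectl autoscale rc busybox0 --min=1 --max=2 --cpu-percent=80
$ kubectl get hpa busybox0 -o jsonpath='{.spec.minReplicas} {.spec.maxReplicas} {.spec.targetCPUUtilizationPercentage}' # prints: 1 2 80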
I1017 16:25:04.415] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 16:25:04.509] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I1017 16:25:04.608] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I1017 16:25:04.808] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1017 16:25:04.906] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1017 16:25:04.909] Successful
I1017 16:25:04.909] message:service/busybox0 exposed
I1017 16:25:04.910] service/busybox1 exposed
I1017 16:25:04.910] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 16:25:04.911] has:Object 'Kind' is missing
I1017 16:25:05.009] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 16:25:05.114] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I1017 16:25:05.232] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I1017 16:25:05.455] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I1017 16:25:05.555] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I1017 16:25:05.558] Successful
I1017 16:25:05.559] message:replicationcontroller/busybox0 scaled
I1017 16:25:05.559] replicationcontroller/busybox1 scaled
I1017 16:25:05.560] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 16:25:05.560] has:Object 'Kind' is missing
I1017 16:25:05.654] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 16:25:05.857] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:25:05.860] Successful
I1017 16:25:05.861] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1017 16:25:05.861] replicationcontroller "busybox0" force deleted
I1017 16:25:05.861] replicationcontroller "busybox1" force deleted
I1017 16:25:05.862] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 16:25:05.862] has:Object 'Kind' is missing
I1017 16:25:05.956] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:25:06.147] deployment.apps/nginx1-deployment created
I1017 16:25:06.165] deployment.apps/nginx0-deployment created
W1017 16:25:06.265] E1017 16:25:03.752844   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:06.266] E1017 16:25:03.862251   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:06.267] E1017 16:25:03.984965   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:06.267] E1017 16:25:04.112653   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:06.267] E1017 16:25:04.754237   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:06.268] E1017 16:25:04.863803   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:06.268] E1017 16:25:04.986317   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:06.268] E1017 16:25:05.114098   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:06.269] I1017 16:25:05.343015   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329498-17280", Name:"busybox0", UID:"4da4169d-efc8-451a-94ac-ac2de5074f5c", APIVersion:"v1", ResourceVersion:"1004", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-xwbxg
W1017 16:25:06.269] I1017 16:25:05.355966   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329498-17280", Name:"busybox1", UID:"9234908c-73b4-4bb4-a029-38feb767edc0", APIVersion:"v1", ResourceVersion:"1009", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-nzwdb
W1017 16:25:06.269] E1017 16:25:05.755809   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:06.269] E1017 16:25:05.865498   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:06.269] E1017 16:25:05.988284   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:06.270] E1017 16:25:06.115791   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:06.270] I1017 16:25:06.150974   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329498-17280", Name:"nginx1-deployment", UID:"cd44c8f5-4aaf-4fb7-aa36-30c2e78c126c", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
W1017 16:25:06.270] I1017 16:25:06.155407   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329498-17280", Name:"nginx0-deployment", UID:"74856a9c-b7c9-4426-aa27-0d88776c64a6", APIVersion:"apps/v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
W1017 16:25:06.271] I1017 16:25:06.155846   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329498-17280", Name:"nginx1-deployment-7bdbbfb5cf", UID:"89ab6243-1853-4487-b578-38815db4a943", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-nhgjg
W1017 16:25:06.271] I1017 16:25:06.158890   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329498-17280", Name:"nginx0-deployment-57c6bff7f6", UID:"448ae92c-12c9-4dec-b26a-ddd754d8920d", APIVersion:"apps/v1", ResourceVersion:"1030", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-jcrbw
W1017 16:25:06.271] I1017 16:25:06.159307   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329498-17280", Name:"nginx1-deployment-7bdbbfb5cf", UID:"89ab6243-1853-4487-b578-38815db4a943", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-hxwv5
W1017 16:25:06.272] I1017 16:25:06.161868   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329498-17280", Name:"nginx0-deployment-57c6bff7f6", UID:"448ae92c-12c9-4dec-b26a-ddd754d8920d", APIVersion:"apps/v1", ResourceVersion:"1030", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-9ntsd
W1017 16:25:06.272] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1017 16:25:06.372] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I1017 16:25:06.404] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1017 16:25:06.633] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1017 16:25:06.633] Successful
I1017 16:25:06.634] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I1017 16:25:06.634] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I1017 16:25:06.634] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1017 16:25:06.635] has:Object 'Kind' is missing
I1017 16:25:06.753] deployment.apps/nginx1-deployment paused
I1017 16:25:06.760] deployment.apps/nginx0-deployment paused
W1017 16:25:06.861] E1017 16:25:06.757056   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:06.868] E1017 16:25:06.867368   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:06.968] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I1017 16:25:06.969] Successful
I1017 16:25:06.970] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1017 16:25:06.970] has:Object 'Kind' is missing
I1017 16:25:07.022] deployment.apps/nginx1-deployment resumed
I1017 16:25:07.026] deployment.apps/nginx0-deployment resumed
W1017 16:25:07.128] E1017 16:25:06.990173   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:07.128] E1017 16:25:07.117405   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:07.229] generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I1017 16:25:07.229] Successful
I1017 16:25:07.230] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1017 16:25:07.230] has:Object 'Kind' is missing
I1017 16:25:07.276] Successful
I1017 16:25:07.277] message:deployment.apps/nginx1-deployment 
I1017 16:25:07.277] REVISION  CHANGE-CAUSE
I1017 16:25:07.277] 1         <none>
I1017 16:25:07.277] 
I1017 16:25:07.277] deployment.apps/nginx0-deployment 
I1017 16:25:07.277] REVISION  CHANGE-CAUSE
I1017 16:25:07.277] 1         <none>
I1017 16:25:07.278] 
I1017 16:25:07.278] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1017 16:25:07.278] has:nginx0-deployment
I1017 16:25:07.280] Successful
I1017 16:25:07.280] message:deployment.apps/nginx1-deployment 
I1017 16:25:07.280] REVISION  CHANGE-CAUSE
I1017 16:25:07.280] 1         <none>
I1017 16:25:07.280] 
I1017 16:25:07.281] deployment.apps/nginx0-deployment 
I1017 16:25:07.281] REVISION  CHANGE-CAUSE
I1017 16:25:07.281] 1         <none>
I1017 16:25:07.281] 
I1017 16:25:07.281] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1017 16:25:07.282] has:nginx1-deployment
I1017 16:25:07.282] Successful
I1017 16:25:07.283] message:deployment.apps/nginx1-deployment 
I1017 16:25:07.283] REVISION  CHANGE-CAUSE
I1017 16:25:07.283] 1         <none>
I1017 16:25:07.283] 
I1017 16:25:07.283] deployment.apps/nginx0-deployment 
I1017 16:25:07.283] REVISION  CHANGE-CAUSE
I1017 16:25:07.283] 1         <none>
I1017 16:25:07.283] 
I1017 16:25:07.284] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1017 16:25:07.284] has:Object 'Kind' is missing
I1017 16:25:07.383] deployment.apps "nginx1-deployment" force deleted
I1017 16:25:07.388] deployment.apps "nginx0-deployment" force deleted
W1017 16:25:07.489] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1017 16:25:07.490] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W1017 16:25:07.759] E1017 16:25:07.758955   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:07.870] E1017 16:25:07.869062   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:07.993] E1017 16:25:07.992116   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:08.119] E1017 16:25:08.118701   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:08.501] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:25:08.678] replicationcontroller/busybox0 created
I1017 16:25:08.682] replicationcontroller/busybox1 created
W1017 16:25:08.783] I1017 16:25:08.681745   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329498-17280", Name:"busybox0", UID:"27127845-f18b-449a-83e7-762789048c94", APIVersion:"v1", ResourceVersion:"1074", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-ppt98
W1017 16:25:08.784] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1017 16:25:08.784] I1017 16:25:08.686084   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329498-17280", Name:"busybox1", UID:"92ae700d-4570-4635-8a3f-a707dc29bbf9", APIVersion:"v1", ResourceVersion:"1076", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-nvx8z
W1017 16:25:08.785] E1017 16:25:08.761231   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:08.871] E1017 16:25:08.870431   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:08.971] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 16:25:08.972] Successful
I1017 16:25:08.972] message:no rollbacker has been implemented for "ReplicationController"
I1017 16:25:08.972] no rollbacker has been implemented for "ReplicationController"
I1017 16:25:08.973] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 16:25:08.973] has:no rollbacker has been implemented for "ReplicationController"
I1017 16:25:08.973] Successful
I1017 16:25:08.973] message:no rollbacker has been implemented for "ReplicationController"
I1017 16:25:08.973] no rollbacker has been implemented for "ReplicationController"
I1017 16:25:08.974] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 16:25:08.974] has:Object 'Kind' is missing
I1017 16:25:09.011] Successful
I1017 16:25:09.012] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 16:25:09.012] error: replicationcontrollers "busybox0" pausing is not supported
I1017 16:25:09.012] error: replicationcontrollers "busybox1" pausing is not supported
I1017 16:25:09.012] has:Object 'Kind' is missing
I1017 16:25:09.014] Successful
I1017 16:25:09.015] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 16:25:09.015] error: replicationcontrollers "busybox0" pausing is not supported
I1017 16:25:09.015] error: replicationcontrollers "busybox1" pausing is not supported
I1017 16:25:09.015] has:replicationcontrollers "busybox0" pausing is not supported
I1017 16:25:09.017] Successful
I1017 16:25:09.018] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 16:25:09.018] error: replicationcontrollers "busybox0" pausing is not supported
I1017 16:25:09.018] error: replicationcontrollers "busybox1" pausing is not supported
I1017 16:25:09.018] has:replicationcontrollers "busybox1" pausing is not supported
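Note: "pausing is not supported" is the expected result here; kubectl rollout pause/resume is implemented only for kinds that have a rollout (Deployments), and ReplicationControllers reject it. A minimal sketch of the invocation these assertions exercise (flags assumed from the recursive-fixture path in the output):

  # walks the directory tree, pausing every object it can decode;
  # each RC answers "pausing is not supported", and the broken
  # fixture still fails to decode
  kubectl rollout pause -f hack/testdata/recursive/rc --recursive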
W1017 16:25:09.119] E1017 16:25:08.993738   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:09.122] E1017 16:25:09.121444   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:09.203] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1017 16:25:09.222] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 16:25:09.322] Successful
I1017 16:25:09.323] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 16:25:09.323] error: replicationcontrollers "busybox0" resuming is not supported
I1017 16:25:09.324] error: replicationcontrollers "busybox1" resuming is not supported
I1017 16:25:09.324] has:Object 'Kind' is missing
I1017 16:25:09.324] Successful
I1017 16:25:09.324] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 16:25:09.325] error: replicationcontrollers "busybox0" resuming is not supported
I1017 16:25:09.325] error: replicationcontrollers "busybox1" resuming is not supported
I1017 16:25:09.325] has:replicationcontrollers "busybox0" resuming is not supported
I1017 16:25:09.325] Successful
I1017 16:25:09.325] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 16:25:09.326] error: replicationcontrollers "busybox0" resuming is not supported
I1017 16:25:09.326] error: replicationcontrollers "busybox1" resuming is not supported
I1017 16:25:09.326] has:replicationcontrollers "busybox1" resuming is not supported
I1017 16:25:09.326] replicationcontroller "busybox0" force deleted
I1017 16:25:09.326] replicationcontroller "busybox1" force deleted
W1017 16:25:09.763] E1017 16:25:09.762839   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:09.873] E1017 16:25:09.871983   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:09.996] E1017 16:25:09.995563   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:10.123] E1017 16:25:10.122833   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:10.231] Recording: run_namespace_tests
I1017 16:25:10.231] Running command: run_namespace_tests
I1017 16:25:10.259] 
I1017 16:25:10.263] +++ Running case: test-cmd.run_namespace_tests 
I1017 16:25:10.266] +++ working dir: /go/src/k8s.io/kubernetes
I1017 16:25:10.270] +++ command: run_namespace_tests
I1017 16:25:10.280] +++ [1017 16:25:10] Testing kubectl(v1:namespaces)
I1017 16:25:10.370] namespace/my-namespace created
I1017 16:25:10.475] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I1017 16:25:10.561] namespace "my-namespace" deleted
W1017 16:25:10.765] E1017 16:25:10.764316   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 18 lines ...
W1017 16:25:15.086] I1017 16:25:15.086009   52942 shared_informer.go:197] Waiting for caches to sync for resource quota
W1017 16:25:15.087] I1017 16:25:15.086073   52942 shared_informer.go:204] Caches are synced for resource quota 
W1017 16:25:15.131] E1017 16:25:15.131082   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:15.498] I1017 16:25:15.497789   52942 shared_informer.go:197] Waiting for caches to sync for garbage collector
W1017 16:25:15.498] I1017 16:25:15.497865   52942 shared_informer.go:204] Caches are synced for garbage collector 
I1017 16:25:15.673] namespace/my-namespace condition met
I1017 16:25:15.765] Successful
I1017 16:25:15.765] message:Error from server (NotFound): namespaces "my-namespace" not found
I1017 16:25:15.766] has: not found
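Note: the "condition met" line is characteristic of kubectl wait; the sequence being exercised appears to be delete-then-wait (assumed commands, matching the messages above):

  kubectl delete namespace my-namespace
  kubectl wait --for=delete namespace/my-namespace   # prints "namespace/my-namespace condition met"
  kubectl get namespace my-namespace                 # now fails with NotFound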
I1017 16:25:15.843] namespace/my-namespace created
W1017 16:25:15.945] E1017 16:25:15.773155   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:15.946] E1017 16:25:15.882687   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:16.007] E1017 16:25:16.006876   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:16.108] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I1017 16:25:16.192] Successful
I1017 16:25:16.193] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I1017 16:25:16.193] namespace "kube-node-lease" deleted
I1017 16:25:16.193] namespace "my-namespace" deleted
I1017 16:25:16.193] namespace "namespace-1571329363-27488" deleted
... skipping 27 lines ...
I1017 16:25:16.198] namespace "namespace-1571329462-7766" deleted
I1017 16:25:16.198] namespace "namespace-1571329463-32729" deleted
I1017 16:25:16.198] namespace "namespace-1571329465-25571" deleted
I1017 16:25:16.199] namespace "namespace-1571329467-3507" deleted
I1017 16:25:16.199] namespace "namespace-1571329498-17280" deleted
I1017 16:25:16.199] namespace "namespace-1571329498-25057" deleted
I1017 16:25:16.199] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I1017 16:25:16.199] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I1017 16:25:16.200] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I1017 16:25:16.200] has:warning: deleting cluster-scoped resources
I1017 16:25:16.200] Successful
I1017 16:25:16.200] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I1017 16:25:16.200] namespace "kube-node-lease" deleted
I1017 16:25:16.201] namespace "my-namespace" deleted
I1017 16:25:16.201] namespace "namespace-1571329363-27488" deleted
... skipping 27 lines ...
I1017 16:25:16.205] namespace "namespace-1571329462-7766" deleted
I1017 16:25:16.205] namespace "namespace-1571329463-32729" deleted
I1017 16:25:16.206] namespace "namespace-1571329465-25571" deleted
I1017 16:25:16.206] namespace "namespace-1571329467-3507" deleted
I1017 16:25:16.206] namespace "namespace-1571329498-17280" deleted
I1017 16:25:16.206] namespace "namespace-1571329498-25057" deleted
I1017 16:25:16.206] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I1017 16:25:16.207] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I1017 16:25:16.207] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I1017 16:25:16.207] has:namespace "my-namespace" deleted
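Note: the three Forbidden errors are expected; the NamespaceLifecycle admission plugin makes "default", "kube-public", and "kube-system" undeletable. A sketch of the kind of bulk delete that produces this output (assumed invocation):

  # deletes every namespace; the admission plugin rejects the system ones
  kubectl delete namespaces --all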
W1017 16:25:16.308] E1017 16:25:16.132181   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:16.408] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I1017 16:25:16.414] namespace/other created
I1017 16:25:16.527] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I1017 16:25:16.640] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:25:16.833] pod/valid-pod created
W1017 16:25:16.934] E1017 16:25:16.774783   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:16.935] E1017 16:25:16.884775   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:17.009] E1017 16:25:17.008424   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:17.109] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 16:25:17.110] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 16:25:17.175] Successful
I1017 16:25:17.175] message:error: a resource cannot be retrieved by name across all namespaces
I1017 16:25:17.175] has:a resource cannot be retrieved by name across all namespaces
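Note: kubectl rejects fetching a single named object together with --all-namespaces, which is what this check asserts; an invocation of this shape reproduces it (pod name taken from the surrounding test):

  kubectl get pod valid-pod --all-namespaces   # error: a resource cannot be retrieved by name across all namespaces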
I1017 16:25:17.280] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 16:25:17.392] pod "valid-pod" force deleted
W1017 16:25:17.493] E1017 16:25:17.133824   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:17.493] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1017 16:25:17.594] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:25:17.601] namespace "other" deleted
W1017 16:25:17.780] E1017 16:25:17.776972   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 5 lines ...
W1017 16:25:18.934] I1017 16:25:18.933435   52942 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1571329498-17280
W1017 16:25:18.938] I1017 16:25:18.937807   52942 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1571329498-17280
W1017 16:25:19.012] E1017 16:25:19.011781   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 13 lines ...
I1017 16:25:22.717] +++ exit code: 0
I1017 16:25:22.756] Recording: run_secrets_test
I1017 16:25:22.757] Running command: run_secrets_test
I1017 16:25:22.783] 
I1017 16:25:22.786] +++ Running case: test-cmd.run_secrets_test 
I1017 16:25:22.789] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 48 lines ...
I1017 16:25:23.845] secret "test-secret" deleted
I1017 16:25:23.948] core.sh:748: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:25:24.027] secret/test-secret created
I1017 16:25:24.122] core.sh:752: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I1017 16:25:24.218] core.sh:753: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
I1017 16:25:24.391] secret "test-secret" deleted
W1017 16:25:24.491] E1017 16:25:22.792063   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:24.492] E1017 16:25:22.894131   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:24.492] E1017 16:25:23.019096   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:24.492] I1017 16:25:23.036923   68866 loader.go:375] Config loaded from file:  /tmp/tmp.DN1mQSQXVk/.kube/config
W1017 16:25:24.492] E1017 16:25:23.145942   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:24.493] E1017 16:25:23.793809   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:24.493] E1017 16:25:23.895617   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:24.493] E1017 16:25:24.020695   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:24.493] E1017 16:25:24.147409   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:24.594] core.sh:763: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:25:24.594] secret/test-secret created
I1017 16:25:24.670] core.sh:766: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I1017 16:25:24.765] core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I1017 16:25:24.843] secret "test-secret" deleted
I1017 16:25:24.926] secret/test-secret created
I1017 16:25:25.025] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I1017 16:25:25.121] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I1017 16:25:25.202] secret "test-secret" deleted
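Note: these checks verify the typed secret generators; sketches of the corresponding commands (credentials and key paths are placeholders):

  kubectl create secret docker-registry test-secret -n test-secrets \
    --docker-username=user --docker-password=pass \
    --docker-email=user@example.com                # yields type kubernetes.io/dockerconfigjson
  kubectl create secret tls test-secret -n test-secrets \
    --cert=path/to/tls.crt --key=path/to/tls.key   # yields type kubernetes.io/tls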
W1017 16:25:25.303] E1017 16:25:24.795313   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:25.303] E1017 16:25:24.897028   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:25.303] E1017 16:25:25.022184   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:25.304] E1017 16:25:25.149148   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:25.404] secret/secret-string-data created
I1017 16:25:25.468] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I1017 16:25:25.565] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I1017 16:25:25.658] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I1017 16:25:25.744] secret "secret-string-data" deleted
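Note: core.sh:796-798 confirm that stringData is write-only; the API server base64-encodes it into .data ("v1" becomes "djE=", "v2" becomes "djI=") and never stores .stringData. One plausible fixture (a sketch; the repository file may differ):

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-string-data
    namespace: test-secrets
  stringData:
    k1: v1   # folded into .data as djE=
    k2: v2   # folded into .data as djI=
  EOF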
I1017 16:25:25.849] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:25:26.015] secret "test-secret" deleted
I1017 16:25:26.106] namespace "test-secrets" deleted
W1017 16:25:26.207] I1017 16:25:25.793565   52942 namespace_controller.go:185] Namespace has been deleted my-namespace
W1017 16:25:26.207] E1017 16:25:25.796862   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:26.208] E1017 16:25:25.898300   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:26.208] E1017 16:25:26.023734   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:26.208] E1017 16:25:26.150488   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:26.260] I1017 16:25:26.259420   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329363-27488
W1017 16:25:26.268] I1017 16:25:26.268111   52942 namespace_controller.go:185] Namespace has been deleted kube-node-lease
W1017 16:25:26.269] I1017 16:25:26.268946   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329366-29714
W1017 16:25:26.282] I1017 16:25:26.281993   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329369-71
W1017 16:25:26.284] I1017 16:25:26.283668   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329379-20086
W1017 16:25:26.284] I1017 16:25:26.284371   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329379-27062
... skipping 15 lines ...
W1017 16:25:26.753] I1017 16:25:26.752695   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329440-12628
W1017 16:25:26.765] I1017 16:25:26.764978   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329441-28040
W1017 16:25:26.769] I1017 16:25:26.768155   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329462-29738
W1017 16:25:26.773] I1017 16:25:26.772684   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329442-25768
W1017 16:25:26.789] I1017 16:25:26.787034   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329456-28149
W1017 16:25:26.794] I1017 16:25:26.793276   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329450-32436
W1017 16:25:26.799] E1017 16:25:26.798407   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:26.802] I1017 16:25:26.802276   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329457-21514
W1017 16:25:26.810] I1017 16:25:26.810102   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329422-25220
W1017 16:25:26.829] I1017 16:25:26.828926   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329462-7766
W1017 16:25:26.900] E1017 16:25:26.900019   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:26.913] I1017 16:25:26.912679   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329463-32729
W1017 16:25:26.946] I1017 16:25:26.945447   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329465-25571
W1017 16:25:26.949] I1017 16:25:26.948408   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329467-3507
W1017 16:25:26.956] I1017 16:25:26.956209   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329498-25057
W1017 16:25:26.999] I1017 16:25:26.998509   52942 namespace_controller.go:185] Namespace has been deleted namespace-1571329498-17280
W1017 16:25:27.026] E1017 16:25:27.025472   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:27.152] E1017 16:25:27.151755   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:27.688] I1017 16:25:27.687776   52942 namespace_controller.go:185] Namespace has been deleted other
W1017 16:25:27.800] E1017 16:25:27.799874   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 15 lines ...
I1017 16:25:31.258] +++ exit code: 0
I1017 16:25:31.263] Recording: run_configmap_tests
I1017 16:25:31.263] Running command: run_configmap_tests
I1017 16:25:31.289] 
I1017 16:25:31.292] +++ Running case: test-cmd.run_configmap_tests 
I1017 16:25:31.295] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 2 lines ...
I1017 16:25:31.385] namespace/namespace-1571329531-8933 created
I1017 16:25:31.458] Context "test" modified.
I1017 16:25:31.467] +++ [1017 16:25:31] Testing configmaps
I1017 16:25:31.664] configmap/test-configmap created
I1017 16:25:31.766] core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
I1017 16:25:31.844] configmap "test-configmap" deleted
W1017 16:25:31.945] E1017 16:25:31.807327   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:31.946] E1017 16:25:31.908770   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:32.035] E1017 16:25:32.034999   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:32.136] core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
I1017 16:25:32.136] namespace/test-configmaps created
I1017 16:25:32.137] core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
I1017 16:25:32.226] core.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
I1017 16:25:32.321] core.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-binary-configmap\" }}found{{end}}{{end}}:: :
I1017 16:25:32.400] configmap/test-configmap created
I1017 16:25:32.482] configmap/test-binary-configmap created
I1017 16:25:32.579] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I1017 16:25:32.670] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I1017 16:25:32.922] configmap "test-configmap" deleted
I1017 16:25:33.004] configmap "test-binary-configmap" deleted
I1017 16:25:33.094] namespace "test-configmaps" deleted
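Note: sketches of the create commands these assertions cover (literal key and file name are placeholders); a --from-file payload that is not valid UTF-8 lands in .binaryData rather than .data:

  kubectl create configmap test-configmap -n test-configmaps --from-literal=key1=value1
  kubectl create configmap test-binary-configmap -n test-configmaps --from-file=payload.bin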
W1017 16:25:33.195] E1017 16:25:32.158843   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 16 lines ...
W1017 16:25:36.201] I1017 16:25:36.200822   52942 namespace_controller.go:185] Namespace has been deleted test-secrets
W1017 16:25:36.816] E1017 16:25:36.815197   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 7 lines ...
I1017 16:25:38.268] +++ exit code: 0
I1017 16:25:38.269] Recording: run_client_config_tests
I1017 16:25:38.269] Running command: run_client_config_tests
I1017 16:25:38.286] 
I1017 16:25:38.289] +++ Running case: test-cmd.run_client_config_tests 
I1017 16:25:38.292] +++ working dir: /go/src/k8s.io/kubernetes
I1017 16:25:38.295] +++ command: run_client_config_tests
I1017 16:25:38.308] +++ [1017 16:25:38] Creating namespace namespace-1571329538-18832
I1017 16:25:38.389] namespace/namespace-1571329538-18832 created
I1017 16:25:38.469] Context "test" modified.
I1017 16:25:38.478] +++ [1017 16:25:38] Testing client config
I1017 16:25:38.555] Successful
I1017 16:25:38.555] message:error: stat missing: no such file or directory
I1017 16:25:38.555] has:missing: no such file or directory
I1017 16:25:38.643] Successful
I1017 16:25:38.643] message:error: stat missing: no such file or directory
I1017 16:25:38.644] has:missing: no such file or directory
I1017 16:25:38.725] Successful
I1017 16:25:38.725] message:error: stat missing: no such file or directory
I1017 16:25:38.726] has:missing: no such file or directory
I1017 16:25:38.800] Successful
I1017 16:25:38.800] message:Error in configuration: context was not found for specified context: missing-context
I1017 16:25:38.800] has:context was not found for specified context: missing-context
I1017 16:25:38.885] Successful
I1017 16:25:38.885] message:error: no server found for cluster "missing-cluster"
I1017 16:25:38.886] has:no server found for cluster "missing-cluster"
I1017 16:25:38.959] Successful
I1017 16:25:38.959] message:error: auth info "missing-user" does not exist
I1017 16:25:38.959] has:auth info "missing-user" does not exist
W1017 16:25:39.060] E1017 16:25:38.817907   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:39.060] E1017 16:25:38.921795   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:39.061] E1017 16:25:39.046995   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:39.161] Successful
I1017 16:25:39.162] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I1017 16:25:39.162] has:error loading config file
I1017 16:25:39.190] Successful
I1017 16:25:39.190] message:error: stat missing-config: no such file or directory
I1017 16:25:39.190] has:no such file or directory
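Note: each failure above maps to one client-config flag; assumed invocations of the shape under test:

  kubectl get pods --kubeconfig=missing               # error: stat missing: no such file or directory
  kubectl get pods --context=missing-context          # context was not found for specified context
  kubectl get pods --cluster=missing-cluster          # no server found for cluster "missing-cluster"
  kubectl get pods --user=missing-user                # auth info "missing-user" does not exist
  kubectl get pods --kubeconfig=/tmp/newconfig.yaml   # rejects a config whose apiVersion is the bogus "v-1"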
I1017 16:25:39.206] +++ exit code: 0
I1017 16:25:39.243] Recording: run_service_accounts_tests
I1017 16:25:39.243] Running command: run_service_accounts_tests
I1017 16:25:39.269] 
I1017 16:25:39.272] +++ Running case: test-cmd.run_service_accounts_tests 
I1017 16:25:39.275] +++ working dir: /go/src/k8s.io/kubernetes
I1017 16:25:39.279] +++ command: run_service_accounts_tests
I1017 16:25:39.292] +++ [1017 16:25:39] Creating namespace namespace-1571329539-31792
I1017 16:25:39.367] namespace/namespace-1571329539-31792 created
I1017 16:25:39.448] Context "test" modified.
I1017 16:25:39.457] +++ [1017 16:25:39] Testing service accounts
W1017 16:25:39.557] E1017 16:25:39.168891   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:39.658] core.sh:828: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-service-accounts\" }}found{{end}}{{end}}:: :
I1017 16:25:39.659] namespace/test-service-accounts created
I1017 16:25:39.767] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I1017 16:25:39.848] serviceaccount/test-service-account created
W1017 16:25:39.949] E1017 16:25:39.821831   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:39.950] E1017 16:25:39.923467   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:40.049] E1017 16:25:40.048755   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:40.150] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I1017 16:25:40.150] serviceaccount "test-service-account" deleted
I1017 16:25:40.160] namespace "test-service-accounts" deleted
W1017 16:25:40.261] E1017 16:25:40.170419   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 12 lines ...
W1017 16:25:43.190] I1017 16:25:43.189861   52942 namespace_controller.go:185] Namespace has been deleted test-configmaps
W1017 16:25:43.829] E1017 16:25:43.828680   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 7 lines ...
I1017 16:25:45.286] +++ exit code: 0
I1017 16:25:45.325] Recording: run_job_tests
I1017 16:25:45.325] Running command: run_job_tests
I1017 16:25:45.352] 
I1017 16:25:45.356] +++ Running case: test-cmd.run_job_tests 
I1017 16:25:45.359] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 6 lines ...
I1017 16:25:45.724] namespace/test-jobs created
I1017 16:25:45.826] batch.sh:34: Successful get namespaces/test-jobs {{.metadata.name}}: test-jobs
I1017 16:25:45.915] cronjob.batch/pi created
I1017 16:25:46.017] batch.sh:39: Successful get cronjob/pi --namespace=test-jobs {{.metadata.name}}: pi
I1017 16:25:46.095] NAME   SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
I1017 16:25:46.096] pi     59 23 31 2 *   False     0        <none>          1s
W1017 16:25:46.197] E1017 16:25:45.831948   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:46.197] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1017 16:25:46.197] E1017 16:25:45.933286   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:46.198] E1017 16:25:46.058582   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:46.198] E1017 16:25:46.181455   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:46.298] Name:                          pi
I1017 16:25:46.299] Namespace:                     test-jobs
I1017 16:25:46.299] Labels:                        run=pi
I1017 16:25:46.299] Annotations:                   <none>
I1017 16:25:46.299] Schedule:                      59 23 31 2 *
I1017 16:25:46.300] Concurrency Policy:            Allow
I1017 16:25:46.300] Suspend:                       False
I1017 16:25:46.300] Successful Job History Limit:  3
I1017 16:25:46.300] Failed Job History Limit:      1
I1017 16:25:46.300] Starting Deadline Seconds:     <unset>
I1017 16:25:46.300] Selector:                      <unset>
I1017 16:25:46.300] Parallelism:                   <unset>
I1017 16:25:46.300] Completions:                   <unset>
I1017 16:25:46.301] Pod Template:
I1017 16:25:46.301]   Labels:  run=pi
... skipping 22 lines ...
I1017 16:25:46.433] batch.sh:48: Successful get jobs {{range.items}}{{.metadata.name}}{{end}}: 
I1017 16:25:46.533] job.batch/test-job created
W1017 16:25:46.634] I1017 16:25:46.531564   52942 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"96f47f55-11a3-42b1-be87-7c00d0980ffa", APIVersion:"batch/v1", ResourceVersion:"1397", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-d82fm
I1017 16:25:46.735] batch.sh:53: Successful get job/test-job --namespace=test-jobs {{.metadata.name}}: test-job
I1017 16:25:46.754] NAME       COMPLETIONS   DURATION   AGE
I1017 16:25:46.754] test-job   0/1           0s         0s
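The Job above is controlled by CronJob/pi and, as the describe output below shows, carries the cronjob.kubernetes.io/instantiate: manual annotation, which kubectl adds when a Job is instantiated from a CronJob by hand:

    kubectl create job test-job --namespace=test-jobs --from=cronjob/pi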
W1017 16:25:46.855] E1017 16:25:46.833544   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:46.935] E1017 16:25:46.934508   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:47.036] Name:           test-job
I1017 16:25:47.036] Namespace:      test-jobs
I1017 16:25:47.036] Selector:       controller-uid=96f47f55-11a3-42b1-be87-7c00d0980ffa
I1017 16:25:47.036] Labels:         controller-uid=96f47f55-11a3-42b1-be87-7c00d0980ffa
I1017 16:25:47.036]                 job-name=test-job
I1017 16:25:47.037]                 run=pi
I1017 16:25:47.037] Annotations:    cronjob.kubernetes.io/instantiate: manual
I1017 16:25:47.037] Controlled By:  CronJob/pi
I1017 16:25:47.037] Parallelism:    1
I1017 16:25:47.037] Completions:    1
I1017 16:25:47.037] Start Time:     Thu, 17 Oct 2019 16:25:46 +0000
I1017 16:25:47.037] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I1017 16:25:47.037] Pod Template:
I1017 16:25:47.038]   Labels:  controller-uid=96f47f55-11a3-42b1-be87-7c00d0980ffa
I1017 16:25:47.038]            job-name=test-job
I1017 16:25:47.038]            run=pi
I1017 16:25:47.038]   Containers:
I1017 16:25:47.038]    pi:
... skipping 15 lines ...
I1017 16:25:47.040]   Type    Reason            Age   From            Message
I1017 16:25:47.040]   ----    ------            ----  ----            -------
I1017 16:25:47.040]   Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-d82fm
I1017 16:25:47.040] job.batch "test-job" deleted
I1017 16:25:47.047] cronjob.batch "pi" deleted
I1017 16:25:47.140] namespace "test-jobs" deleted
W1017 16:25:47.241] E1017 16:25:47.060323   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 13 similar reflector errors ...
W1017 16:25:50.259] I1017 16:25:50.259051   52942 namespace_controller.go:185] Namespace has been deleted test-service-accounts
W1017 16:25:50.841] E1017 16:25:50.840807   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 7 similar reflector errors ...
I1017 16:25:52.293] +++ exit code: 0
I1017 16:25:52.294] Recording: run_create_job_tests
I1017 16:25:52.294] Running command: run_create_job_tests
I1017 16:25:52.320] 
I1017 16:25:52.323] +++ Running case: test-cmd.run_create_job_tests 
I1017 16:25:52.326] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 26 lines ...
I1017 16:25:53.730] Context "test" modified.
I1017 16:25:53.739] +++ [1017 16:25:53] Testing pod templates
I1017 16:25:53.833] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:25:53.990] podtemplate/nginx created
W1017 16:25:54.091] I1017 16:25:52.574113   52942 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1571329552-23754", Name:"test-job", UID:"52b6153c-3304-43e8-b234-c4bd030d3322", APIVersion:"batch/v1", ResourceVersion:"1416", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-84ntk
W1017 16:25:54.091] I1017 16:25:52.841944   52942 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1571329552-23754", Name:"test-job-pi", UID:"dfa3904c-81a5-4468-9ba2-abcab0d770be", APIVersion:"batch/v1", ResourceVersion:"1423", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-ztgmx
W1017 16:25:54.092] E1017 16:25:52.844560   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:54.092] E1017 16:25:52.945342   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:54.093] E1017 16:25:53.071771   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:54.093] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1017 16:25:54.094] E1017 16:25:53.193451   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:54.094] I1017 16:25:53.210071   52942 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1571329552-23754", Name:"my-pi", UID:"96377bf4-dcf6-4848-92d0-017fe7ce396d", APIVersion:"batch/v1", ResourceVersion:"1431", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-24vrv
W1017 16:25:54.095] E1017 16:25:53.846071   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:54.095] E1017 16:25:53.947052   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:54.095] I1017 16:25:53.987643   49408 controller.go:606] quota admission added evaluator for: podtemplates
W1017 16:25:54.096] E1017 16:25:54.073185   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:54.195] E1017 16:25:54.194950   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:54.296] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1017 16:25:54.297] NAME    CONTAINERS   IMAGES   POD LABELS
I1017 16:25:54.297] nginx   nginx        nginx    name=nginx
I1017 16:25:54.371] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1017 16:25:54.456] podtemplate "nginx" deleted
I1017 16:25:54.560] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
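PodTemplates are ordinary namespaced objects, and the core.sh assertions above just create, list, and delete one. A minimal manifest consistent with the nginx/name=nginx columns printed above (a sketch, not the test's actual fixture):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: PodTemplate
    metadata:
      name: nginx
      labels:
        name: nginx
    template:
      metadata:
        labels:
          name: nginx
      spec:
        containers:
        - name: nginx
          image: nginx
    EOF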
... skipping 5 lines ...
I1017 16:25:54.642] +++ working dir: /go/src/k8s.io/kubernetes
I1017 16:25:54.645] +++ command: run_service_tests
I1017 16:25:54.734] Context "test" modified.
I1017 16:25:54.743] +++ [1017 16:25:54] Testing kubectl(v1:services)
I1017 16:25:54.852] core.sh:858: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 16:25:55.013] service/redis-master created
W1017 16:25:55.114] E1017 16:25:54.847408   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:55.115] E1017 16:25:54.948851   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:55.115] E1017 16:25:55.075161   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:55.197] E1017 16:25:55.196328   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:55.297] core.sh:862: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1017 16:25:55.298]
I1017 16:25:55.298] core.sh:864: FAIL!
I1017 16:25:55.298] Describe services redis-master
I1017 16:25:55.298]   Expected Match: Name:
I1017 16:25:55.298]   Not found in:
I1017 16:25:55.298] Name:              redis-master
I1017 16:25:55.298] Namespace:         default
I1017 16:25:55.298] Labels:            app=redis
... skipping 56 lines ...
I1017 16:25:55.557] TargetPort:        6379/TCP
I1017 16:25:55.557] Endpoints:         <none>
I1017 16:25:55.557] Session Affinity:  None
I1017 16:25:55.557] Events:            <none>
I1017 16:25:55.557]
I1017 16:25:55.663] 
I1017 16:25:55.663] FAIL!
I1017 16:25:55.663] Describe services
I1017 16:25:55.663]   Expected Match: Name:
I1017 16:25:55.663]   Not found in:
I1017 16:25:55.663] Name:              kubernetes
I1017 16:25:55.664] Namespace:         default
I1017 16:25:55.664] Labels:            component=apiserver
... skipping 157 lines ...
I1017 16:25:56.265]   type: ClusterIP
I1017 16:25:56.266] status:
I1017 16:25:56.266]   loadBalancer: {}
I1017 16:25:56.362] service/redis-master selector updated
I1017 16:25:56.464] core.sh:890: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: padawan:
I1017 16:25:56.565] service/redis-master selector updated
W1017 16:25:56.666] E1017 16:25:55.848602   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:56.667] E1017 16:25:55.950833   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:56.667] E1017 16:25:56.076634   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:56.667] E1017 16:25:56.198943   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:56.768] core.sh:894: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I1017 16:25:56.768] apiVersion: v1
I1017 16:25:56.769] kind: Service
I1017 16:25:56.769] metadata:
I1017 16:25:56.769]   creationTimestamp: "2019-10-17T16:25:55Z"
I1017 16:25:56.769]   labels:
... skipping 14 lines ...
I1017 16:25:56.771]   selector:
I1017 16:25:56.771]     role: padawan
I1017 16:25:56.771]   sessionAffinity: None
I1017 16:25:56.771]   type: ClusterIP
I1017 16:25:56.771] status:
I1017 16:25:56.771]   loadBalancer: {}
W1017 16:25:56.872] error: you must specify resources by --filename when --local is set.
W1017 16:25:56.872] Example resource specifications include:
W1017 16:25:56.873]    '-f rsrc.yaml'
W1017 16:25:56.873]    '--filename=rsrc.json'
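The error above is kubectl enforcing that --local operations read the object from a file: with --local nothing is fetched from the server, so -f is mandatory. A valid client-side-only invocation matching the selector edits in this section would be (the file name is a placeholder):

    # Rewrite the selector purely client-side and print the result;
    # the apiserver is never contacted.
    kubectl set selector -f redis-master-service.yaml role=padawan --local -o yaml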
W1017 16:25:56.873] E1017 16:25:56.849887   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:56.952] E1017 16:25:56.952186   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:57.053] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I1017 16:25:57.159] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1017 16:25:57.258] service "redis-master" deleted
W1017 16:25:57.359] E1017 16:25:57.078162   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:57.360] E1017 16:25:57.200461   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:57.360] I1017 16:25:57.228705   52942 namespace_controller.go:185] Namespace has been deleted test-jobs
I1017 16:25:57.462] core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 16:25:57.484] core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 16:25:57.666] service/redis-master created
I1017 16:25:57.780] core.sh:920: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1017 16:25:57.887] core.sh:924: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1017 16:25:58.070] service/service-v1-test created
W1017 16:25:58.171] E1017 16:25:57.851187   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:58.172] E1017 16:25:57.953744   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:58.172] E1017 16:25:58.079630   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:58.202] E1017 16:25:58.201981   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:58.303] core.sh:945: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I1017 16:25:58.377] service/service-v1-test replaced
I1017 16:25:58.493] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I1017 16:25:58.586] service "redis-master" deleted
I1017 16:25:58.691] service "service-v1-test" deleted
I1017 16:25:58.803] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 16:25:58.905] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 16:25:59.072] service/redis-master created
W1017 16:25:59.173] E1017 16:25:58.852623   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:59.174] E1017 16:25:58.955500   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:59.174] E1017 16:25:59.080905   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:25:59.204] E1017 16:25:59.203577   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:25:59.305] service/redis-slave created
I1017 16:25:59.380] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I1017 16:25:59.478] Successful
I1017 16:25:59.479] message:NAME           RSRC
I1017 16:25:59.479] kubernetes     145
I1017 16:25:59.479] redis-master   1467
... skipping 2 lines ...
I1017 16:25:59.579] core.sh:979: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I1017 16:25:59.672] service "redis-master" deleted
I1017 16:25:59.681] service "redis-slave" deleted
I1017 16:25:59.799] core.sh:986: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 16:25:59.900] core.sh:990: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 16:25:59.978] service/beep-boop created
W1017 16:26:00.083] E1017 16:25:59.854177   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:00.084] E1017 16:25:59.957824   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:00.085] E1017 16:26:00.084828   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:00.185] core.sh:994: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
I1017 16:26:00.198] core.sh:998: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
I1017 16:26:00.292] service "beep-boop" deleted
I1017 16:26:00.398] core.sh:1005: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 16:26:00.503] core.sh:1009: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:00.623] service/testmetadata created
I1017 16:26:00.623] deployment.apps/testmetadata created
W1017 16:26:00.723] E1017 16:26:00.205284   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:00.724] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1017 16:26:00.724] I1017 16:26:00.598868   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"8e82db6f-067a-4dfd-adbc-2742a33ea5ae", APIVersion:"apps/v1", ResourceVersion:"1484", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-bd968f46 to 2
W1017 16:26:00.724] I1017 16:26:00.608143   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"3b8a07ce-1d3c-4f59-abed-b7d3546fca94", APIVersion:"apps/v1", ResourceVersion:"1485", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-8qthw
W1017 16:26:00.725] I1017 16:26:00.618012   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"3b8a07ce-1d3c-4f59-abed-b7d3546fca94", APIVersion:"apps/v1", ResourceVersion:"1485", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-fh9zb
I1017 16:26:00.825] core.sh:1013: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: testmetadata:
I1017 16:26:00.833] core.sh:1014: Successful get service testmetadata {{.metadata.annotations}}: map[zone-context:home]
I1017 16:26:00.940] service/exposemetadata exposed
W1017 16:26:01.041] E1017 16:26:00.855995   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:01.041] E1017 16:26:00.959260   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:01.087] E1017 16:26:01.086272   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:01.187] core.sh:1020: Successful get service exposemetadata {{.metadata.annotations}}: map[zone-context:work]
I1017 16:26:01.188] service "exposemetadata" deleted
I1017 16:26:01.189] service "testmetadata" deleted
I1017 16:26:01.228] deployment.apps "testmetadata" deleted
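The zone-context assertions above read service annotations through kubectl's go-template output; the same check, plus a manual annotation update, would look like this (names taken from the log, run before the deletions above):

    kubectl get service testmetadata -o go-template='{{.metadata.annotations}}'
    kubectl annotate service exposemetadata zone-context=work --overwrite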
I1017 16:26:01.252] +++ exit code: 0
I1017 16:26:01.288] Recording: run_daemonset_tests
... skipping 5 lines ...
I1017 16:26:01.337] +++ [1017 16:26:01] Creating namespace namespace-1571329561-8157
I1017 16:26:01.414] namespace/namespace-1571329561-8157 created
I1017 16:26:01.491] Context "test" modified.
I1017 16:26:01.500] +++ [1017 16:26:01] Testing kubectl(v1:daemonsets)
I1017 16:26:01.596] apps.sh:30: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:01.763] daemonset.apps/bind created
W1017 16:26:01.864] E1017 16:26:01.207681   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:01.864] I1017 16:26:01.759936   49408 controller.go:606] quota admission added evaluator for: daemonsets.apps
W1017 16:26:01.864] I1017 16:26:01.770582   49408 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W1017 16:26:01.865] E1017 16:26:01.857161   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:01.961] E1017 16:26:01.960597   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:02.062] apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
I1017 16:26:02.062] daemonset.apps/bind configured
I1017 16:26:02.145] apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
I1017 16:26:02.236] daemonset.apps/bind image updated
I1017 16:26:02.336] apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
I1017 16:26:02.425] daemonset.apps/bind env updated
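The generation assertions above reflect the API convention that metadata.generation increments only on spec changes: re-applying an identical manifest leaves it at 1, while the image update bumps it to 2. The pattern, using the container name and manifest path recorded later in this log:

    kubectl apply -f hack/testdata/rollingupdate-daemonset.yaml      # no spec change: generation stays 1
    kubectl set image daemonset/bind kubernetes-pause=k8s.gcr.io/pause:latest
    kubectl get daemonset bind -o go-template='{{.metadata.generation}}'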
... skipping 13 lines ...
I1017 16:26:03.093] +++ [1017 16:26:03] Creating namespace namespace-1571329563-31786
I1017 16:26:03.168] namespace/namespace-1571329563-31786 created
I1017 16:26:03.241] Context "test" modified.
I1017 16:26:03.249] +++ [1017 16:26:03] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
I1017 16:26:03.342] apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:03.507] daemonset.apps/bind created
W1017 16:26:03.608] E1017 16:26:02.087783   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 5 similar reflector errors ...
I1017 16:26:03.712] apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1571329563-31786"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I1017 16:26:03.712]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
I1017 16:26:03.717] daemonset.apps/bind skipped rollback (current template already matches revision 1)
I1017 16:26:03.819] apps.sh:73: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1017 16:26:03.910] apps.sh:74: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1017 16:26:04.067] daemonset.apps/bind configured
... skipping 11 lines ...
I1017 16:26:04.560]     Port:	<none>
I1017 16:26:04.560]     Host Port:	<none>
I1017 16:26:04.560]     Environment:	<none>
I1017 16:26:04.560]     Mounts:	<none>
I1017 16:26:04.560]   Volumes:	<none>
I1017 16:26:04.560]  (dry run)
W1017 16:26:04.661] E1017 16:26:03.860364   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:04.661] E1017 16:26:03.963623   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:04.662] E1017 16:26:04.090933   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:04.662] E1017 16:26:04.212344   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:04.762] apps.sh:83: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1017 16:26:04.786] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 16:26:04.887] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1017 16:26:04.990] daemonset.apps/bind rolled back
W1017 16:26:05.091] E1017 16:26:04.862061   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:05.091] E1017 16:26:04.964843   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:05.097] E1017 16:26:05.002149   52942 daemon_controller.go:302] namespace-1571329563-31786/bind failed with: error storing status for daemon set "bind" (... full DaemonSet object dump elided ...): Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W1017 16:26:05.098] E1017 16:26:05.092223   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
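The daemon_controller error condensed above is the apiserver's optimistic-concurrency check rejecting a status write made with a stale resourceVersion; controllers recover by re-reading the object and retrying. The same conflict can be provoked with a racing read-modify-write (a sketch, not part of the test):

    kubectl get daemonset bind -o yaml > /tmp/ds.yaml
    # ...another writer updates the daemonset in between...
    kubectl replace -f /tmp/ds.yaml   # rejected: the object has been modified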
I1017 16:26:05.198] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1017 16:26:05.199] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1017 16:26:05.305] Successful
I1017 16:26:05.306] message:error: unable to find specified revision 1000000 in history
I1017 16:26:05.306] has:unable to find specified revision
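Both the earlier "skipped rollback (current template already matches revision 1)" message and the "unable to find specified revision 1000000" error come from kubectl rollout undo resolving revisions against the daemonset's controllerrevisions:

    kubectl rollout history daemonset/bind
    kubectl rollout undo daemonset/bind --to-revision=1        # no-op when the template already matches
    kubectl rollout undo daemonset/bind --to-revision=1000000  # fails: revision not found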
I1017 16:26:05.401] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1017 16:26:05.497] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1017 16:26:05.601] daemonset.apps/bind rolled back
I1017 16:26:05.704] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1017 16:26:05.802] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 9 lines ...
I1017 16:26:06.086] +++ [1017 16:26:06] Creating namespace namespace-1571329566-26216
I1017 16:26:06.163] namespace/namespace-1571329566-26216 created
I1017 16:26:06.245] Context "test" modified.
I1017 16:26:06.258] +++ [1017 16:26:06] Testing kubectl(v1:replicationcontrollers)
I1017 16:26:06.358] core.sh:1046: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:06.539] replicationcontroller/frontend created
W1017 16:26:06.639] E1017 16:26:05.214205   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 4 similar reflector errors ...
W1017 16:26:06.642] I1017 16:26:06.544849   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"frontend", UID:"a490d706-b0bc-4536-bafe-9babe96262f0", APIVersion:"v1", ResourceVersion:"1561", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-bk2sc
W1017 16:26:06.642] I1017 16:26:06.548046   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"frontend", UID:"a490d706-b0bc-4536-bafe-9babe96262f0", APIVersion:"v1", ResourceVersion:"1561", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-r8t9x
W1017 16:26:06.643] I1017 16:26:06.549903   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"frontend", UID:"a490d706-b0bc-4536-bafe-9babe96262f0", APIVersion:"v1", ResourceVersion:"1561", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-grtsw
I1017 16:26:06.743] replicationcontroller "frontend" deleted
I1017 16:26:06.754] core.sh:1051: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:06.863] core.sh:1055: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:07.065] replicationcontroller/frontend created
W1017 16:26:07.166] E1017 16:26:06.865051   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:07.166] E1017 16:26:06.967780   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:07.167] I1017 16:26:07.068627   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"frontend", UID:"dd4ac6c5-f6a1-4fe3-adaa-36aed96c037b", APIVersion:"v1", ResourceVersion:"1577", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qf26q
W1017 16:26:07.167] I1017 16:26:07.072665   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"frontend", UID:"dd4ac6c5-f6a1-4fe3-adaa-36aed96c037b", APIVersion:"v1", ResourceVersion:"1577", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-wzbgs
W1017 16:26:07.168] I1017 16:26:07.073096   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"frontend", UID:"dd4ac6c5-f6a1-4fe3-adaa-36aed96c037b", APIVersion:"v1", ResourceVersion:"1577", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hwt8q
W1017 16:26:07.168] E1017 16:26:07.095366   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:07.217] E1017 16:26:07.217194   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:07.322] core.sh:1059: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I1017 16:26:07.322]
I1017 16:26:07.323] core.sh:1061: FAIL!
I1017 16:26:07.323] Describe rc frontend
I1017 16:26:07.323]   Expected Match: Name:
I1017 16:26:07.323]   Not found in:
I1017 16:26:07.323] Name:         frontend
I1017 16:26:07.323] Namespace:    namespace-1571329566-26216
I1017 16:26:07.323] Selector:     app=guestbook,tier=frontend
I1017 16:26:07.323] Labels:       app=guestbook
I1017 16:26:07.323]               tier=frontend
I1017 16:26:07.324] Annotations:  <none>
I1017 16:26:07.324] Replicas:     3 current / 3 desired
I1017 16:26:07.324] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 16:26:07.324] Pod Template:
I1017 16:26:07.324]   Labels:  app=guestbook
I1017 16:26:07.324]            tier=frontend
I1017 16:26:07.324]   Containers:
I1017 16:26:07.324]    php-redis:
I1017 16:26:07.324]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 20 lines ...
I1017 16:26:07.437] Namespace:    namespace-1571329566-26216
I1017 16:26:07.437] Selector:     app=guestbook,tier=frontend
I1017 16:26:07.437] Labels:       app=guestbook
I1017 16:26:07.437]               tier=frontend
I1017 16:26:07.437] Annotations:  <none>
I1017 16:26:07.437] Replicas:     3 current / 3 desired
I1017 16:26:07.437] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 16:26:07.438] Pod Template:
I1017 16:26:07.438]   Labels:  app=guestbook
I1017 16:26:07.438]            tier=frontend
I1017 16:26:07.438]   Containers:
I1017 16:26:07.438]    php-redis:
I1017 16:26:07.438]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I1017 16:26:07.551] Namespace:    namespace-1571329566-26216
I1017 16:26:07.551] Selector:     app=guestbook,tier=frontend
I1017 16:26:07.551] Labels:       app=guestbook
I1017 16:26:07.551]               tier=frontend
I1017 16:26:07.552] Annotations:  <none>
I1017 16:26:07.552] Replicas:     3 current / 3 desired
I1017 16:26:07.552] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 16:26:07.552] Pod Template:
I1017 16:26:07.552]   Labels:  app=guestbook
I1017 16:26:07.553]            tier=frontend
I1017 16:26:07.553]   Containers:
I1017 16:26:07.553]    php-redis:
I1017 16:26:07.553]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I1017 16:26:07.667] Namespace:    namespace-1571329566-26216
I1017 16:26:07.667] Selector:     app=guestbook,tier=frontend
I1017 16:26:07.667] Labels:       app=guestbook
I1017 16:26:07.667]               tier=frontend
I1017 16:26:07.667] Annotations:  <none>
I1017 16:26:07.667] Replicas:     3 current / 3 desired
I1017 16:26:07.667] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 16:26:07.668] Pod Template:
I1017 16:26:07.668]   Labels:  app=guestbook
I1017 16:26:07.668]            tier=frontend
I1017 16:26:07.668]   Containers:
I1017 16:26:07.668]    php-redis:
I1017 16:26:07.668]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I1017 16:26:07.669]   ----    ------            ----  ----                    -------
I1017 16:26:07.670]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-qf26q
I1017 16:26:07.670]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-wzbgs
I1017 16:26:07.670]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-hwt8q
I1017 16:26:07.670]
I1017 16:26:07.796] 
I1017 16:26:07.797] FAIL!
I1017 16:26:07.797] Describe rc
I1017 16:26:07.797]   Expected Match: Name:
I1017 16:26:07.798]   Not found in:
I1017 16:26:07.798] Name:         frontend
I1017 16:26:07.798] Namespace:    namespace-1571329566-26216
I1017 16:26:07.798] Selector:     app=guestbook,tier=frontend
I1017 16:26:07.798] Labels:       app=guestbook
I1017 16:26:07.798]               tier=frontend
I1017 16:26:07.798] Annotations:  <none>
I1017 16:26:07.798] Replicas:     3 current / 3 desired
I1017 16:26:07.799] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 16:26:07.799] Pod Template:
I1017 16:26:07.799]   Labels:  app=guestbook
I1017 16:26:07.799]            tier=frontend
I1017 16:26:07.799]   Containers:
I1017 16:26:07.799]    php-redis:
I1017 16:26:07.799]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I1017 16:26:07.800]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-qf26q
I1017 16:26:07.800]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-wzbgs
I1017 16:26:07.801]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-hwt8q
I1017 16:26:07.801]
I1017 16:26:07.801] 1069 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
I1017 16:26:07.801]
W1017 16:26:07.902] E1017 16:26:07.868431   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:07.970] E1017 16:26:07.969422   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:08.070] Successful describe
I1017 16:26:08.071] Name:         frontend
I1017 16:26:08.071] Namespace:    namespace-1571329566-26216
I1017 16:26:08.071] Selector:     app=guestbook,tier=frontend
I1017 16:26:08.071] Labels:       app=guestbook
I1017 16:26:08.071]               tier=frontend
I1017 16:26:08.071] Annotations:  <none>
I1017 16:26:08.071] Replicas:     3 current / 3 desired
I1017 16:26:08.072] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 16:26:08.072] Pod Template:
I1017 16:26:08.072]   Labels:  app=guestbook
I1017 16:26:08.072]            tier=frontend
I1017 16:26:08.072]   Containers:
I1017 16:26:08.072]    php-redis:
I1017 16:26:08.072]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1017 16:26:08.074] Namespace:    namespace-1571329566-26216
I1017 16:26:08.074] Selector:     app=guestbook,tier=frontend
I1017 16:26:08.074] Labels:       app=guestbook
I1017 16:26:08.074]               tier=frontend
I1017 16:26:08.074] Annotations:  <none>
I1017 16:26:08.074] Replicas:     3 current / 3 desired
I1017 16:26:08.074] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 16:26:08.074] Pod Template:
I1017 16:26:08.075]   Labels:  app=guestbook
I1017 16:26:08.075]            tier=frontend
I1017 16:26:08.075]   Containers:
I1017 16:26:08.075]    php-redis:
I1017 16:26:08.075]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I1017 16:26:08.137] Namespace:    namespace-1571329566-26216
I1017 16:26:08.138] Selector:     app=guestbook,tier=frontend
I1017 16:26:08.138] Labels:       app=guestbook
I1017 16:26:08.138]               tier=frontend
I1017 16:26:08.138] Annotations:  <none>
I1017 16:26:08.138] Replicas:     3 current / 3 desired
I1017 16:26:08.138] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 16:26:08.138] Pod Template:
I1017 16:26:08.139]   Labels:  app=guestbook
I1017 16:26:08.139]            tier=frontend
I1017 16:26:08.139]   Containers:
I1017 16:26:08.139]    php-redis:
I1017 16:26:08.139]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I1017 16:26:08.141]   ----    ------            ----  ----                    -------
I1017 16:26:08.141]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-qf26q
I1017 16:26:08.141]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-wzbgs
I1017 16:26:08.141]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-hwt8q
I1017 16:26:08.242] core.sh:1079: Successful get rc frontend {{.spec.replicas}}: 3
I1017 16:26:08.337] replicationcontroller/frontend scaled
W1017 16:26:08.438] E1017 16:26:08.096885   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:08.438] E1017 16:26:08.218635   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:08.439] I1017 16:26:08.342245   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"frontend", UID:"dd4ac6c5-f6a1-4fe3-adaa-36aed96c037b", APIVersion:"v1", ResourceVersion:"1586", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-qf26q
I1017 16:26:08.539] core.sh:1083: Successful get rc frontend {{.spec.replicas}}: 2
I1017 16:26:08.544] core.sh:1087: Successful get rc frontend {{.spec.replicas}}: 2
I1017 16:26:08.736] core.sh:1091: Successful get rc frontend {{.spec.replicas}}: 2
I1017 16:26:08.838] core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
I1017 16:26:08.924] replicationcontroller/frontend scaled
W1017 16:26:09.024] error: Expected replicas to be 3, was 2
W1017 16:26:09.025] E1017 16:26:08.869930   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:09.025] I1017 16:26:08.927685   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"frontend", UID:"dd4ac6c5-f6a1-4fe3-adaa-36aed96c037b", APIVersion:"v1", ResourceVersion:"1593", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7rv88
W1017 16:26:09.025] E1017 16:26:08.970734   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:09.099] E1017 16:26:09.098679   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
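The "Expected replicas to be 3, was 2" error above is kubectl scale's precondition flag at work: with --current-replicas the resize is applied only if the live replica count matches. Consistent with the surrounding assertions:

    kubectl scale rc frontend --current-replicas=3 --replicas=3   # rejected while the rc has 2 replicas
    kubectl scale rc frontend --replicas=3                        # unconditional scale succeeds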
I1017 16:26:09.200] core.sh:1099: Successful get rc frontend {{.spec.replicas}}: 3
I1017 16:26:09.204] core.sh:1103: Successful get rc frontend {{.spec.replicas}}: 3
I1017 16:26:09.224] replicationcontroller/frontend scaled
I1017 16:26:09.329] core.sh:1107: Successful get rc frontend {{.spec.replicas}}: 2
I1017 16:26:09.417] replicationcontroller "frontend" deleted
W1017 16:26:09.518] E1017 16:26:09.220694   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:09.519] I1017 16:26:09.227776   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"frontend", UID:"dd4ac6c5-f6a1-4fe3-adaa-36aed96c037b", APIVersion:"v1", ResourceVersion:"1598", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-7rv88
W1017 16:26:09.604] I1017 16:26:09.603766   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"redis-master", UID:"8e18db59-53f4-42c7-94a0-8351f9a71269", APIVersion:"v1", ResourceVersion:"1609", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-8gxh2
I1017 16:26:09.705] replicationcontroller/redis-master created
I1017 16:26:09.788] replicationcontroller/redis-slave created
W1017 16:26:09.889] I1017 16:26:09.792285   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"redis-slave", UID:"c4ace05b-eace-4cb9-b24b-a09848ec1dba", APIVersion:"v1", ResourceVersion:"1615", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-5ww5j
W1017 16:26:09.890] I1017 16:26:09.800362   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"redis-slave", UID:"c4ace05b-eace-4cb9-b24b-a09848ec1dba", APIVersion:"v1", ResourceVersion:"1615", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-5wdmj
W1017 16:26:09.890] E1017 16:26:09.871525   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:09.899] I1017 16:26:09.898377   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"redis-master", UID:"8e18db59-53f4-42c7-94a0-8351f9a71269", APIVersion:"v1", ResourceVersion:"1622", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-hj5gw
W1017 16:26:09.902] I1017 16:26:09.901831   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"redis-master", UID:"8e18db59-53f4-42c7-94a0-8351f9a71269", APIVersion:"v1", ResourceVersion:"1622", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-dqxnj
W1017 16:26:09.903] I1017 16:26:09.903000   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"redis-master", UID:"8e18db59-53f4-42c7-94a0-8351f9a71269", APIVersion:"v1", ResourceVersion:"1622", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-5mmzj
W1017 16:26:09.909] I1017 16:26:09.909247   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"redis-slave", UID:"c4ace05b-eace-4cb9-b24b-a09848ec1dba", APIVersion:"v1", ResourceVersion:"1627", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-7khww
W1017 16:26:09.912] I1017 16:26:09.912002   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"redis-slave", UID:"c4ace05b-eace-4cb9-b24b-a09848ec1dba", APIVersion:"v1", ResourceVersion:"1627", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-7tc92
W1017 16:26:09.972] E1017 16:26:09.972255   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:10.073] replicationcontroller/redis-master scaled
I1017 16:26:10.073] replicationcontroller/redis-slave scaled
I1017 16:26:10.074] core.sh:1117: Successful get rc redis-master {{.spec.replicas}}: 4
I1017 16:26:10.132] core.sh:1118: Successful get rc redis-slave {{.spec.replicas}}: 4
I1017 16:26:10.228] replicationcontroller "redis-master" deleted
I1017 16:26:10.234] replicationcontroller "redis-slave" deleted
W1017 16:26:10.335] E1017 16:26:10.100470   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:10.336] E1017 16:26:10.222058   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:10.434] I1017 16:26:10.432484   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment", UID:"10a1bb63-9fea-40a6-baa9-b8cab201006f", APIVersion:"apps/v1", ResourceVersion:"1656", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W1017 16:26:10.436] I1017 16:26:10.436167   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-6986c7bc94", UID:"8bc16fbb-4863-45df-8adf-cd189b3d2668", APIVersion:"apps/v1", ResourceVersion:"1657", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-86ff8
W1017 16:26:10.439] I1017 16:26:10.439098   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-6986c7bc94", UID:"8bc16fbb-4863-45df-8adf-cd189b3d2668", APIVersion:"apps/v1", ResourceVersion:"1657", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-cl48p
W1017 16:26:10.441] I1017 16:26:10.440420   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-6986c7bc94", UID:"8bc16fbb-4863-45df-8adf-cd189b3d2668", APIVersion:"apps/v1", ResourceVersion:"1657", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-64cff
I1017 16:26:10.541] deployment.apps/nginx-deployment created
I1017 16:26:10.550] deployment.apps/nginx-deployment scaled
W1017 16:26:10.651] I1017 16:26:10.554411   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment", UID:"10a1bb63-9fea-40a6-baa9-b8cab201006f", APIVersion:"apps/v1", ResourceVersion:"1671", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6986c7bc94 to 1
W1017 16:26:10.652] I1017 16:26:10.562044   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-6986c7bc94", UID:"8bc16fbb-4863-45df-8adf-cd189b3d2668", APIVersion:"apps/v1", ResourceVersion:"1672", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-cl48p
W1017 16:26:10.652] I1017 16:26:10.562812   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-6986c7bc94", UID:"8bc16fbb-4863-45df-8adf-cd189b3d2668", APIVersion:"apps/v1", ResourceVersion:"1672", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-86ff8
I1017 16:26:10.754] core.sh:1127: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
I1017 16:26:10.761] deployment.apps "nginx-deployment" deleted
W1017 16:26:10.873] E1017 16:26:10.872693   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:10.974] E1017 16:26:10.973336   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:11.074] Successful
I1017 16:26:11.075] message:service/expose-test-deployment exposed
I1017 16:26:11.075] has:service/expose-test-deployment exposed
I1017 16:26:11.075] service "expose-test-deployment" deleted
I1017 16:26:11.075] Successful
I1017 16:26:11.075] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1017 16:26:11.075] See 'kubectl expose -h' for help and examples
I1017 16:26:11.076] has:invalid deployment: no selectors
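`kubectl expose` derives the new service's selector from the exposed object, so a deployment whose spec carries no usable selector fails as above. A sketch of the two ways to satisfy it, with the label `app=expose-test` assumed purely for illustration:

  # Let expose introspect the selector from the deployment spec...
  kubectl expose deployment expose-test-deployment --port=80
  # ...or pass one explicitly when introspection cannot find it.
  kubectl expose deployment expose-test-deployment --port=80 --selector=app=expose-test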
W1017 16:26:11.176] E1017 16:26:11.102004   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:11.229] I1017 16:26:11.225364   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment", UID:"ed92a120-41a5-4920-bae6-1b01fe5c3f28", APIVersion:"apps/v1", ResourceVersion:"1695", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W1017 16:26:11.229] E1017 16:26:11.225498   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:11.230] I1017 16:26:11.229061   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-6986c7bc94", UID:"9ff1a2c9-fb95-4600-9cca-91f831b15495", APIVersion:"apps/v1", ResourceVersion:"1696", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-vggvn
W1017 16:26:11.233] I1017 16:26:11.232876   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-6986c7bc94", UID:"9ff1a2c9-fb95-4600-9cca-91f831b15495", APIVersion:"apps/v1", ResourceVersion:"1696", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-nvxm4
W1017 16:26:11.234] I1017 16:26:11.232914   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-6986c7bc94", UID:"9ff1a2c9-fb95-4600-9cca-91f831b15495", APIVersion:"apps/v1", ResourceVersion:"1696", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-6pc9w
I1017 16:26:11.335] deployment.apps/nginx-deployment created
I1017 16:26:11.335] core.sh:1146: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I1017 16:26:11.420] service/nginx-deployment exposed
... skipping 7 lines ...
I1017 16:26:12.156] service/frontend-2 exposed
I1017 16:26:12.254] core.sh:1165: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 443
I1017 16:26:12.429] pod/valid-pod created
W1017 16:26:12.530] I1017 16:26:11.775223   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"frontend", UID:"e6ab5747-d194-4f4d-b0e1-79e0d9134088", APIVersion:"v1", ResourceVersion:"1723", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qkh2z
W1017 16:26:12.530] I1017 16:26:11.778504   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"frontend", UID:"e6ab5747-d194-4f4d-b0e1-79e0d9134088", APIVersion:"v1", ResourceVersion:"1723", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-p2bw5
W1017 16:26:12.531] I1017 16:26:11.778980   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"frontend", UID:"e6ab5747-d194-4f4d-b0e1-79e0d9134088", APIVersion:"v1", ResourceVersion:"1723", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-j9hvs
W1017 16:26:12.532] E1017 16:26:11.874217   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:12.532] E1017 16:26:11.974380   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:12.533] E1017 16:26:12.103666   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:12.533] E1017 16:26:12.227437   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:12.634] service/frontend-3 exposed
I1017 16:26:12.634] core.sh:1170: Successful get service frontend-3 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 444
I1017 16:26:12.722] service/frontend-4 exposed
I1017 16:26:12.826] core.sh:1174: Successful get service frontend-4 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: default 80
I1017 16:26:12.914] service/frontend-5 exposed
I1017 16:26:13.019] core.sh:1178: Successful get service frontend-5 {{(index .spec.ports 0).port}}: 80
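The `frontend-2` through `frontend-5` services above are produced by repeated `kubectl expose` calls that override the service name and port; a sketch consistent with the port assertions (the exact flags used by core.sh are not shown in this log):

  # Same rc exposed twice under different service names and ports.
  kubectl expose rc frontend --port=443 --name=frontend-2
  kubectl expose rc frontend --port=80 --name=frontend-5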
I1017 16:26:13.098] pod "valid-pod" deleted
I1017 16:26:13.189] service "frontend" deleted
I1017 16:26:13.198] service "frontend-2" deleted
I1017 16:26:13.206] service "frontend-3" deleted
I1017 16:26:13.214] service "frontend-4" deleted
I1017 16:26:13.220] service "frontend-5" deleted
W1017 16:26:13.321] E1017 16:26:12.875844   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:13.321] E1017 16:26:12.975871   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:13.321] E1017 16:26:13.104957   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:13.322] E1017 16:26:13.228433   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:13.422] Successful
I1017 16:26:13.423] message:error: cannot expose a Node
I1017 16:26:13.423] has:cannot expose
I1017 16:26:13.423] Successful
I1017 16:26:13.423] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I1017 16:26:13.423] has:metadata.name: Invalid value
I1017 16:26:13.520] Successful
I1017 16:26:13.521] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 7 lines ...
I1017 16:26:14.010] service "etcd-server" deleted
I1017 16:26:14.116] core.sh:1215: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I1017 16:26:14.195] replicationcontroller "frontend" deleted
I1017 16:26:14.301] core.sh:1219: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:14.394] core.sh:1223: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:14.555] replicationcontroller/frontend created
W1017 16:26:14.656] E1017 16:26:13.877305   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:14.657] E1017 16:26:13.977271   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:14.657] E1017 16:26:14.106133   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:14.658] E1017 16:26:14.229954   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:14.658] I1017 16:26:14.559648   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"frontend", UID:"4cf927d9-edf1-4feb-aac4-27f69728c6c4", APIVersion:"v1", ResourceVersion:"1786", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rv76t
W1017 16:26:14.659] I1017 16:26:14.561940   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"frontend", UID:"4cf927d9-edf1-4feb-aac4-27f69728c6c4", APIVersion:"v1", ResourceVersion:"1786", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9vq5p
W1017 16:26:14.659] I1017 16:26:14.563398   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"frontend", UID:"4cf927d9-edf1-4feb-aac4-27f69728c6c4", APIVersion:"v1", ResourceVersion:"1786", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-89bqw
W1017 16:26:14.733] I1017 16:26:14.732817   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"redis-slave", UID:"9e09747a-4e72-43f0-8af3-1a49791c2237", APIVersion:"v1", ResourceVersion:"1795", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-82xzm
W1017 16:26:14.736] I1017 16:26:14.736313   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"redis-slave", UID:"9e09747a-4e72-43f0-8af3-1a49791c2237", APIVersion:"v1", ResourceVersion:"1795", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-q7rws
I1017 16:26:14.837] replicationcontroller/redis-slave created
I1017 16:26:14.838] core.sh:1228: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
I1017 16:26:14.938] core.sh:1232: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
I1017 16:26:15.019] replicationcontroller "frontend" deleted
I1017 16:26:15.028] replicationcontroller "redis-slave" deleted
I1017 16:26:15.132] core.sh:1236: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:15.228] core.sh:1240: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:15.385] replicationcontroller/frontend created
W1017 16:26:15.486] E1017 16:26:14.879011   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:15.487] E1017 16:26:14.978805   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:15.487] E1017 16:26:15.107838   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:15.487] E1017 16:26:15.231992   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:15.488] I1017 16:26:15.389672   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"frontend", UID:"78470a96-e8ff-4b50-83e8-35e29f93f48f", APIVersion:"v1", ResourceVersion:"1814", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-47pmt
W1017 16:26:15.488] I1017 16:26:15.392896   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"frontend", UID:"78470a96-e8ff-4b50-83e8-35e29f93f48f", APIVersion:"v1", ResourceVersion:"1814", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-29525
W1017 16:26:15.489] I1017 16:26:15.393277   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571329566-26216", Name:"frontend", UID:"78470a96-e8ff-4b50-83e8-35e29f93f48f", APIVersion:"v1", ResourceVersion:"1814", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-srl4m
I1017 16:26:15.589] core.sh:1243: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I1017 16:26:15.590] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1017 16:26:15.679] core.sh:1246: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I1017 16:26:15.760] horizontalpodautoscaler.autoscaling "frontend" deleted
I1017 16:26:15.847] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1017 16:26:15.946] core.sh:1250: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1017 16:26:16.025] horizontalpodautoscaler.autoscaling "frontend" deleted
W1017 16:26:16.126] E1017 16:26:15.880703   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:16.126] E1017 16:26:15.980835   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:16.127] Error: required flag(s) "max" not set
W1017 16:26:16.127] 
W1017 16:26:16.127] 
W1017 16:26:16.127] Examples:
W1017 16:26:16.127]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1017 16:26:16.127]   kubectl autoscale deployment foo --min=2 --max=10
W1017 16:26:16.127]   
... skipping 18 lines ...
W1017 16:26:16.131] 
W1017 16:26:16.132] Usage:
W1017 16:26:16.132]   kubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [options]
W1017 16:26:16.132] 
W1017 16:26:16.132] Use "kubectl options" for a list of global command-line options (applies to all commands).
W1017 16:26:16.132] 
W1017 16:26:16.132] E1017 16:26:16.109157   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
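The usage text above is kubectl rejecting an `autoscale` call that omitted the required `--max` flag; the successful assertions earlier in this block (min 1/max 2/CPU 70, then min 2/max 3/CPU 80) correspond to invocations along these lines:

  # Fails as above: --max is required.
  kubectl autoscale rc frontend --min=2 --cpu-percent=80
  # Succeeds; yields minReplicas=2, maxReplicas=3, targetCPUUtilizationPercentage=80.
  kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80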
I1017 16:26:16.233] replicationcontroller "frontend" deleted
I1017 16:26:16.310] core.sh:1259: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:16.399] apiVersion: apps/v1
I1017 16:26:16.399] kind: Deployment
I1017 16:26:16.400] metadata:
I1017 16:26:16.400]   creationTimestamp: null
... skipping 24 lines ...
I1017 16:26:16.403]           limits:
I1017 16:26:16.403]             cpu: 300m
I1017 16:26:16.403]           requests:
I1017 16:26:16.403]             cpu: 300m
I1017 16:26:16.403]       terminationGracePeriodSeconds: 0
I1017 16:26:16.403] status: {}
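The YAML above (a null `creationTimestamp` and an empty `status: {}`) is client-side generated output rather than a live object dump. One common way to produce such a manifest without touching the server, sketched with this log's names — the exact core.sh command is not shown here, and `--dry-run` was a boolean flag in kubectl of this era (`--dry-run=client` in newer releases):

  kubectl create deployment nginx-deployment-resources \
    --image=k8s.gcr.io/nginx:test-cmd --dry-run -o yaml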
W1017 16:26:16.505] E1017 16:26:16.233465   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:16.511] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I1017 16:26:16.690] deployment.apps/nginx-deployment-resources created
W1017 16:26:16.791] I1017 16:26:16.692926   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-resources", UID:"eb66dded-29bc-4eea-b883-7c5aa47f6ac7", APIVersion:"apps/v1", ResourceVersion:"1835", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-67f8cfff5 to 3
W1017 16:26:16.792] I1017 16:26:16.696278   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-resources-67f8cfff5", UID:"1cf49612-a348-498f-a9e4-b2d1352a1641", APIVersion:"apps/v1", ResourceVersion:"1836", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-n4q6l
W1017 16:26:16.792] I1017 16:26:16.699514   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-resources-67f8cfff5", UID:"1cf49612-a348-498f-a9e4-b2d1352a1641", APIVersion:"apps/v1", ResourceVersion:"1836", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-gwsjt
W1017 16:26:16.793] I1017 16:26:16.700132   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-resources-67f8cfff5", UID:"1cf49612-a348-498f-a9e4-b2d1352a1641", APIVersion:"apps/v1", ResourceVersion:"1836", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-rx4zh
W1017 16:26:16.883] E1017 16:26:16.882342   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:16.983] E1017 16:26:16.982456   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:17.083] core.sh:1265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I1017 16:26:17.084] core.sh:1266: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 16:26:17.084] core.sh:1267: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1017 16:26:17.150] deployment.apps/nginx-deployment-resources resource requirements updated
W1017 16:26:17.251] E1017 16:26:17.110769   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:17.251] I1017 16:26:17.155061   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-resources", UID:"eb66dded-29bc-4eea-b883-7c5aa47f6ac7", APIVersion:"apps/v1", ResourceVersion:"1849", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-55c547f795 to 1
W1017 16:26:17.252] I1017 16:26:17.158378   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-resources-55c547f795", UID:"113d0090-1192-4e88-b7aa-b19440c2244f", APIVersion:"apps/v1", ResourceVersion:"1850", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-55c547f795-p4t62
W1017 16:26:17.252] E1017 16:26:17.235349   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:17.353] core.sh:1270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I1017 16:26:17.360] core.sh:1271: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I1017 16:26:17.577] deployment.apps/nginx-deployment-resources resource requirements updated
W1017 16:26:17.677] error: unable to find container named redis
W1017 16:26:17.678] I1017 16:26:17.587498   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-resources", UID:"eb66dded-29bc-4eea-b883-7c5aa47f6ac7", APIVersion:"apps/v1", ResourceVersion:"1859", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-67f8cfff5 to 2
W1017 16:26:17.679] I1017 16:26:17.596248   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-resources", UID:"eb66dded-29bc-4eea-b883-7c5aa47f6ac7", APIVersion:"apps/v1", ResourceVersion:"1861", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6d86564b45 to 1
W1017 16:26:17.679] I1017 16:26:17.596924   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-resources-67f8cfff5", UID:"1cf49612-a348-498f-a9e4-b2d1352a1641", APIVersion:"apps/v1", ResourceVersion:"1863", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-67f8cfff5-n4q6l
W1017 16:26:17.680] I1017 16:26:17.603655   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-resources-6d86564b45", UID:"1f060f4e-a90a-48a8-b1d4-37c11015233f", APIVersion:"apps/v1", ResourceVersion:"1867", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6d86564b45-68q9g
I1017 16:26:17.780] core.sh:1276: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1017 16:26:17.804] core.sh:1277: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I1017 16:26:17.903] deployment.apps/nginx-deployment-resources resource requirements updated
W1017 16:26:18.003] E1017 16:26:17.884016   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:18.004] I1017 16:26:17.916572   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-resources", UID:"eb66dded-29bc-4eea-b883-7c5aa47f6ac7", APIVersion:"apps/v1", ResourceVersion:"1880", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-67f8cfff5 to 1
W1017 16:26:18.005] I1017 16:26:17.922156   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-resources-67f8cfff5", UID:"1cf49612-a348-498f-a9e4-b2d1352a1641", APIVersion:"apps/v1", ResourceVersion:"1884", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-67f8cfff5-gwsjt
W1017 16:26:18.005] I1017 16:26:17.925115   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-resources", UID:"eb66dded-29bc-4eea-b883-7c5aa47f6ac7", APIVersion:"apps/v1", ResourceVersion:"1883", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c478d4fdb to 1
W1017 16:26:18.006] I1017 16:26:17.929079   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329566-26216", Name:"nginx-deployment-resources-6c478d4fdb", UID:"ba9d2a09-6eaf-411c-b690-da5b0cd92ff5", APIVersion:"apps/v1", ResourceVersion:"1888", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c478d4fdb-4b5qx
W1017 16:26:18.006] E1017 16:26:17.984846   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:18.107] core.sh:1280: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1017 16:26:18.118] core.sh:1281: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I1017 16:26:18.226] core.sh:1282: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
I1017 16:26:18.324] apiVersion: apps/v1
I1017 16:26:18.325] kind: Deployment
I1017 16:26:18.325] metadata:
... skipping 68 lines ...
I1017 16:26:18.333]     status: "True"
I1017 16:26:18.333]     type: Progressing
I1017 16:26:18.334]   observedGeneration: 4
I1017 16:26:18.334]   replicas: 4
I1017 16:26:18.334]   unavailableReplicas: 4
I1017 16:26:18.334]   updatedReplicas: 1
W1017 16:26:18.435] E1017 16:26:18.112392   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:18.435] E1017 16:26:18.237549   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:18.435] error: you must specify resources by --filename when --local is set.
W1017 16:26:18.435] Example resource specifications include:
W1017 16:26:18.436]    '-f rsrc.yaml'
W1017 16:26:18.436]    '--filename=rsrc.json'
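With `--local`, `kubectl set resources` edits a manifest supplied via `-f` instead of the live object, which is why the command above demands a filename. A sketch using the hypothetical `rsrc.yaml` from kubectl's own hint:

  # Rewrites the limits in the local file's output; nothing is sent to the server.
  kubectl set resources -f rsrc.yaml --limits=cpu=200m,memory=512Mi --local -o yaml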
I1017 16:26:18.536] core.sh:1286: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1017 16:26:18.618] core.sh:1287: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I1017 16:26:18.719] core.sh:1288: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 7 lines ...
I1017 16:26:18.927] +++ command: run_deployment_tests
I1017 16:26:18.940] +++ [1017 16:26:18] Creating namespace namespace-1571329578-32711
I1017 16:26:19.015] namespace/namespace-1571329578-32711 created
I1017 16:26:19.098] Context "test" modified.
I1017 16:26:19.106] +++ [1017 16:26:19] Testing deployments
I1017 16:26:19.192] deployment.apps/test-nginx-extensions created
W1017 16:26:19.292] E1017 16:26:18.885703   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:19.293] E1017 16:26:18.986497   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:19.293] E1017 16:26:19.113827   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:19.294] I1017 16:26:19.195728   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329578-32711", Name:"test-nginx-extensions", UID:"713e77e0-db1e-402b-8918-cdfdc8a6bce7", APIVersion:"apps/v1", ResourceVersion:"1917", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-extensions-5559c76db7 to 1
W1017 16:26:19.294] I1017 16:26:19.200867   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"test-nginx-extensions-5559c76db7", UID:"65a7d9fc-be88-45cd-bd34-2f42a74f96fa", APIVersion:"apps/v1", ResourceVersion:"1918", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-extensions-5559c76db7-rr5lt
W1017 16:26:19.294] E1017 16:26:19.239113   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:19.395] apps.sh:185: Successful get deploy test-nginx-extensions {{(index .spec.template.spec.containers 0).name}}: nginx
I1017 16:26:19.396] Successful
I1017 16:26:19.396] message:10
I1017 16:26:19.396] has not:2
I1017 16:26:19.489] Successful
I1017 16:26:19.489] message:apps/v1
... skipping 6 lines ...
I1017 16:26:19.895] Successful
I1017 16:26:19.896] message:10
I1017 16:26:19.896] has:10
I1017 16:26:19.990] Successful
I1017 16:26:19.990] message:apps/v1
I1017 16:26:19.990] has:apps/v1
W1017 16:26:20.091] E1017 16:26:19.887359   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:20.091] E1017 16:26:19.988259   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:20.116] E1017 16:26:20.115613   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:20.217] 
I1017 16:26:20.217] FAIL!
I1017 16:26:20.217] Describe rs
I1017 16:26:20.217]   Expected Match: Name:
I1017 16:26:20.217]   Not found in:
I1017 16:26:20.217] Name:           test-nginx-apps-79b9bd9585
I1017 16:26:20.217] Namespace:      namespace-1571329578-32711
I1017 16:26:20.217] Selector:       app=test-nginx-apps,pod-template-hash=79b9bd9585
I1017 16:26:20.217] Labels:         app=test-nginx-apps
I1017 16:26:20.217]                 pod-template-hash=79b9bd9585
I1017 16:26:20.218] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I1017 16:26:20.218]                 deployment.kubernetes.io/max-replicas: 2
I1017 16:26:20.218]                 deployment.kubernetes.io/revision: 1
I1017 16:26:20.218] Controlled By:  Deployment/test-nginx-apps
I1017 16:26:20.218] Replicas:       1 current / 1 desired
I1017 16:26:20.218] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1017 16:26:20.218] Pod Template:
I1017 16:26:20.219]   Labels:  app=test-nginx-apps
I1017 16:26:20.219]            pod-template-hash=79b9bd9585
I1017 16:26:20.219]   Containers:
I1017 16:26:20.219]    nginx:
I1017 16:26:20.219]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 7 lines ...
I1017 16:26:20.220]   ----    ------            ----  ----                   -------
I1017 16:26:20.220]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: test-nginx-apps-79b9bd9585-9c2cc
I1017 16:26:20.221] 206 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/apps.sh
I1017 16:26:20.221] 
I1017 16:26:20.221] FAIL!
I1017 16:26:20.221] Describe pods
I1017 16:26:20.221]   Expected Match: Name:
I1017 16:26:20.221]   Not found in:
I1017 16:26:20.221] Name:           test-nginx-apps-79b9bd9585-9c2cc
I1017 16:26:20.221] Namespace:      namespace-1571329578-32711
I1017 16:26:20.222] Priority:       0
... skipping 18 lines ...
I1017 16:26:20.224] Tolerations:      <none>
I1017 16:26:20.224] Events:           <none>
I1017 16:26:20.224] 208 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/apps.sh
I1017 16:26:20.298] deployment.apps "test-nginx-apps" deleted
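Both FAIL! blocks above come from describe assertions: the harness runs `kubectl describe` on the resource and checks the output for an expected token such as `Name:`. A minimal sketch of that pattern (the harness's real helper is more elaborate):

  # Expect "Name:" somewhere in the describe output; report FAIL! otherwise.
  kubectl describe rs test-nginx-apps-79b9bd9585 | grep -q 'Name:' || echo 'FAIL!'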
W1017 16:26:20.398] E1017 16:26:20.240871   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:20.499] apps.sh:214: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:20.504] deployment.apps/nginx-with-command created
W1017 16:26:20.605] I1017 16:26:20.507376   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329578-32711", Name:"nginx-with-command", UID:"1baf239e-ef38-4b42-a31b-53a4159c3f44", APIVersion:"apps/v1", ResourceVersion:"1947", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-with-command-757c6f58dd to 1
W1017 16:26:20.605] I1017 16:26:20.511221   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-with-command-757c6f58dd", UID:"fad788a5-db00-4fb1-ab5d-3364ee1b7798", APIVersion:"apps/v1", ResourceVersion:"1948", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-with-command-757c6f58dd-qkxdq
I1017 16:26:20.706] apps.sh:218: Successful get deploy nginx-with-command {{(index .spec.template.spec.containers 0).name}}: nginx
I1017 16:26:20.706] deployment.apps "nginx-with-command" deleted
I1017 16:26:20.822] apps.sh:224: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:20.990] deployment.apps/deployment-with-unixuserid created
W1017 16:26:21.091] E1017 16:26:20.889334   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:21.092] E1017 16:26:20.989856   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:21.092] I1017 16:26:20.994121   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329578-32711", Name:"deployment-with-unixuserid", UID:"5be17102-7b7e-4cf3-8619-4c2f371d7826", APIVersion:"apps/v1", ResourceVersion:"1961", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set deployment-with-unixuserid-8fcdfc94f to 1
W1017 16:26:21.093] I1017 16:26:20.997224   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"deployment-with-unixuserid-8fcdfc94f", UID:"5e2550f1-ed20-4932-b966-6d1b8aeaa8bb", APIVersion:"apps/v1", ResourceVersion:"1962", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-with-unixuserid-8fcdfc94f-xqsgp
W1017 16:26:21.117] E1017 16:26:21.117149   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:21.218] apps.sh:228: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: deployment-with-unixuserid:
I1017 16:26:21.218] deployment.apps "deployment-with-unixuserid" deleted
I1017 16:26:21.293] apps.sh:235: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:21.461] deployment.apps/nginx-deployment created
W1017 16:26:21.562] E1017 16:26:21.242651   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:21.563] I1017 16:26:21.464830   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment", UID:"3b67e753-d538-49f4-8765-124b13154348", APIVersion:"apps/v1", ResourceVersion:"1975", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W1017 16:26:21.563] I1017 16:26:21.469189   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment-6986c7bc94", UID:"0a7303b7-9cf6-4901-ba0f-b639a3d500d6", APIVersion:"apps/v1", ResourceVersion:"1976", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-z6prc
W1017 16:26:21.564] I1017 16:26:21.472317   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment-6986c7bc94", UID:"0a7303b7-9cf6-4901-ba0f-b639a3d500d6", APIVersion:"apps/v1", ResourceVersion:"1976", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-898w6
W1017 16:26:21.564] I1017 16:26:21.472555   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment-6986c7bc94", UID:"0a7303b7-9cf6-4901-ba0f-b639a3d500d6", APIVersion:"apps/v1", ResourceVersion:"1976", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-2p6rt
I1017 16:26:21.665] apps.sh:239: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 3
I1017 16:26:21.665] deployment.apps "nginx-deployment" deleted
I1017 16:26:21.763] apps.sh:242: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:21.854] apps.sh:246: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:21.951] apps.sh:247: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:22.031] deployment.apps/nginx-deployment created
I1017 16:26:22.134] apps.sh:251: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1
I1017 16:26:22.213] deployment.apps "nginx-deployment" deleted
W1017 16:26:22.314] E1017 16:26:21.890767   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:22.314] E1017 16:26:21.991272   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:22.315] I1017 16:26:22.035001   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment", UID:"5e39afba-19a4-4020-be01-b05566900aee", APIVersion:"apps/v1", ResourceVersion:"1997", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7f6fc565b9 to 1
W1017 16:26:22.315] I1017 16:26:22.038438   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment-7f6fc565b9", UID:"122ecbe7-765b-45b1-bc13-ff83c278d146", APIVersion:"apps/v1", ResourceVersion:"1998", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-7f6fc565b9-8n56d
W1017 16:26:22.315] E1017 16:26:22.118801   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:22.316] E1017 16:26:22.244181   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:22.416] apps.sh:256: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:22.434] apps.sh:257: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1
I1017 16:26:22.614] replicaset.apps "nginx-deployment-7f6fc565b9" deleted
I1017 16:26:22.717] apps.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:22.876] deployment.apps/nginx-deployment created
W1017 16:26:22.977] I1017 16:26:22.879655   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment", UID:"20117974-7fb8-4694-b9cc-5a9cc28454fc", APIVersion:"apps/v1", ResourceVersion:"2016", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W1017 16:26:22.978] I1017 16:26:22.882958   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment-6986c7bc94", UID:"916a72c6-66e1-4ae1-8e68-4fd4582d42d1", APIVersion:"apps/v1", ResourceVersion:"2017", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-xzrgk
W1017 16:26:22.979] I1017 16:26:22.885720   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment-6986c7bc94", UID:"916a72c6-66e1-4ae1-8e68-4fd4582d42d1", APIVersion:"apps/v1", ResourceVersion:"2017", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-5cz84
W1017 16:26:22.980] I1017 16:26:22.885999   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment-6986c7bc94", UID:"916a72c6-66e1-4ae1-8e68-4fd4582d42d1", APIVersion:"apps/v1", ResourceVersion:"2017", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-pjrvk
W1017 16:26:22.980] E1017 16:26:22.892153   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:22.993] E1017 16:26:22.992871   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:23.094] apps.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
I1017 16:26:23.095] horizontalpodautoscaler.autoscaling/nginx-deployment autoscaled
I1017 16:26:23.175] apps.sh:271: Successful get hpa nginx-deployment {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1017 16:26:23.256] (Bhorizontalpodautoscaler.autoscaling "nginx-deployment" deleted
I1017 16:26:23.343] deployment.apps "nginx-deployment" deleted
I1017 16:26:23.445] apps.sh:279: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:23.603] deployment.apps/nginx created
W1017 16:26:23.704] E1017 16:26:23.120461   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:23.704] E1017 16:26:23.245619   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:23.705] I1017 16:26:23.607415   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329578-32711", Name:"nginx", UID:"f4b1a387-ae61-40e6-ba24-1042cc329bfc", APIVersion:"apps/v1", ResourceVersion:"2040", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W1017 16:26:23.705] I1017 16:26:23.611001   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-f87d999f7", UID:"d5d4f4e1-3d41-4daf-a8c8-72c90b101e5e", APIVersion:"apps/v1", ResourceVersion:"2041", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-4pk5b
W1017 16:26:23.706] I1017 16:26:23.614581   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-f87d999f7", UID:"d5d4f4e1-3d41-4daf-a8c8-72c90b101e5e", APIVersion:"apps/v1", ResourceVersion:"2041", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-dq7mj
W1017 16:26:23.706] I1017 16:26:23.615380   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-f87d999f7", UID:"d5d4f4e1-3d41-4daf-a8c8-72c90b101e5e", APIVersion:"apps/v1", ResourceVersion:"2041", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-7j2jl
I1017 16:26:23.807] apps.sh:283: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1017 16:26:23.810] apps.sh:284: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 16:26:23.904] deployment.apps/nginx skipped rollback (current template already matches revision 1)
I1017 16:26:24.006] apps.sh:287: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 16:26:24.170] deployment.apps/nginx configured
W1017 16:26:24.271] E1017 16:26:23.893898   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:24.271] E1017 16:26:23.994370   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:24.271] E1017 16:26:24.122306   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:24.271] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W1017 16:26:24.272] I1017 16:26:24.173730   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329578-32711", Name:"nginx", UID:"f4b1a387-ae61-40e6-ba24-1042cc329bfc", APIVersion:"apps/v1", ResourceVersion:"2054", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-78487f9fd7 to 1
W1017 16:26:24.272] I1017 16:26:24.177260   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-78487f9fd7", UID:"3ea7f4c3-a80a-47c3-9ef6-3e2c0965c623", APIVersion:"apps/v1", ResourceVersion:"2055", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-78487f9fd7-n2xf7
W1017 16:26:24.272] E1017 16:26:24.247529   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:24.373] apps.sh:290: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1017 16:26:24.386]     Image:	k8s.gcr.io/nginx:test-cmd
I1017 16:26:24.487] apps.sh:293: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1017 16:26:24.586] deployment.apps/nginx rolled back
W1017 16:26:24.896] E1017 16:26:24.895720   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:24.996] E1017 16:26:24.996237   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:25.124] E1017 16:26:25.123949   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:25.249] E1017 16:26:25.249061   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:25.690] apps.sh:297: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 16:26:25.897] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 16:26:26.000] deployment.apps/nginx rolled back
W1017 16:26:26.101] error: unable to find specified revision 1000000 in history
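`kubectl rollout undo` accepts `--to-revision`, and naming a revision absent from the ReplicaSet history fails as above, while the plain undo calls in this block succeed:

  # Rolls back to the previous revision.
  kubectl rollout undo deployment nginx
  # Rejected: revision 1000000 does not exist in the history.
  kubectl rollout undo deployment nginx --to-revision=1000000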
W1017 16:26:26.101] E1017 16:26:25.897816   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:26.102] E1017 16:26:25.997752   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:26.126] E1017 16:26:26.125752   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:26.251] E1017 16:26:26.250908   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:26.900] E1017 16:26:26.899559   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:27.000] E1017 16:26:26.999271   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:27.114] apps.sh:304: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1017 16:26:27.219] deployment.apps/nginx paused
W1017 16:26:27.319] E1017 16:26:27.127098   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:27.320] E1017 16:26:27.252460   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:27.352] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
W1017 16:26:27.467] error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
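The two errors above confirm that a paused Deployment rejects both rollback and restart until it is resumed, which the following "resumed" and "rolled back" lines then exercise. A minimal sketch, assuming deployment/nginx:

    kubectl rollout pause deployment/nginx
    kubectl rollout undo deployment/nginx      # rejected: resume it first
    kubectl rollout restart deployment/nginx   # rejected for the same reason
    kubectl rollout resume deployment/nginx
    kubectl rollout undo deployment/nginx      # now proceeds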
I1017 16:26:27.573] deployment.apps/nginx resumed
I1017 16:26:27.693] deployment.apps/nginx rolled back
W1017 16:26:27.902] E1017 16:26:27.901198   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:28.001] E1017 16:26:28.000495   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:28.102]     deployment.kubernetes.io/revision-history: 1,3
W1017 16:26:28.203] error: desired revision (3) is different from the running revision (5)
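The revision-history annotation above records revision numbers that a reused ReplicaSet held before earlier rollbacks; the "desired revision (3) is different from the running revision (5)" error looks like the revision check performed by kubectl rollout status --revision, though the invoking command is not captured in this log. The annotation can be inspected with, for example:

    # ReplicaSet name taken from the events below; adjust to the actual name.
    kubectl get rs nginx-f87d999f7 -o yaml | grep deployment.kubernetes.io/revision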
W1017 16:26:28.204] E1017 16:26:28.128363   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:28.216] I1017 16:26:28.215433   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329578-32711", Name:"nginx", UID:"f4b1a387-ae61-40e6-ba24-1042cc329bfc", APIVersion:"apps/v1", ResourceVersion:"2084", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-f87d999f7 to 2
W1017 16:26:28.222] I1017 16:26:28.221379   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-f87d999f7", UID:"d5d4f4e1-3d41-4daf-a8c8-72c90b101e5e", APIVersion:"apps/v1", ResourceVersion:"2088", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-f87d999f7-4pk5b
W1017 16:26:28.225] I1017 16:26:28.224865   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329578-32711", Name:"nginx", UID:"f4b1a387-ae61-40e6-ba24-1042cc329bfc", APIVersion:"apps/v1", ResourceVersion:"2087", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-77888cf94b to 1
W1017 16:26:28.229] I1017 16:26:28.228516   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-77888cf94b", UID:"e6e5f75a-a3f7-4c68-b865-af90d22ee533", APIVersion:"apps/v1", ResourceVersion:"2094", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-77888cf94b-rtmvs
W1017 16:26:28.255] E1017 16:26:28.254366   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:28.358] deployment.apps/nginx restarted
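The "restarted" line above, together with the surrounding ScalingReplicaSet events, is consistent with kubectl rollout restart triggering a fresh rolling update (it stamps a kubectl.kubernetes.io/restartedAt annotation into the pod template):

    kubectl rollout restart deployment/nginx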
W1017 16:26:28.903] E1017 16:26:28.902760   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:29.004] E1017 16:26:29.003747   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:29.130] E1017 16:26:29.130034   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:29.256] E1017 16:26:29.255786   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:29.428] Successful
I1017 16:26:29.428] message:apiVersion: apps/v1
I1017 16:26:29.429] kind: ReplicaSet
I1017 16:26:29.429] metadata:
I1017 16:26:29.429]   annotations:
I1017 16:26:29.429]     deployment.kubernetes.io/desired-replicas: "3"
... skipping 55 lines ...
W1017 16:26:29.713] I1017 16:26:29.616871   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329578-32711", Name:"nginx2", UID:"2ffc3955-2b90-40dd-9c5d-75b352570789", APIVersion:"apps/v1", ResourceVersion:"2105", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-57b7865cd9 to 3
W1017 16:26:29.714] I1017 16:26:29.621184   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx2-57b7865cd9", UID:"aba60b6c-f882-422b-abad-0bba0342c792", APIVersion:"apps/v1", ResourceVersion:"2106", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-8fwgr
W1017 16:26:29.714] I1017 16:26:29.623458   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx2-57b7865cd9", UID:"aba60b6c-f882-422b-abad-0bba0342c792", APIVersion:"apps/v1", ResourceVersion:"2106", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-jcrnx
W1017 16:26:29.715] I1017 16:26:29.626869   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx2-57b7865cd9", UID:"aba60b6c-f882-422b-abad-0bba0342c792", APIVersion:"apps/v1", ResourceVersion:"2106", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-snmqt
I1017 16:26:29.815] deployment.apps "nginx2" deleted
I1017 16:26:29.827] deployment.apps "nginx" deleted
W1017 16:26:29.928] E1017 16:26:29.904303   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:30.005] E1017 16:26:30.005250   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:30.106] apps.sh:334: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:30.128] deployment.apps/nginx-deployment created
W1017 16:26:30.229] E1017 16:26:30.131143   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:30.230] I1017 16:26:30.132847   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment", UID:"8be3b2ae-22cb-4a3c-984d-382a80941b2a", APIVersion:"apps/v1", ResourceVersion:"2140", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-598d4d68b4 to 3
W1017 16:26:30.230] I1017 16:26:30.136835   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment-598d4d68b4", UID:"7015f9dd-325f-455f-aba1-d9efcd102de1", APIVersion:"apps/v1", ResourceVersion:"2141", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-86ckz
W1017 16:26:30.231] I1017 16:26:30.140080   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment-598d4d68b4", UID:"7015f9dd-325f-455f-aba1-d9efcd102de1", APIVersion:"apps/v1", ResourceVersion:"2141", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-snwbr
W1017 16:26:30.231] I1017 16:26:30.140520   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment-598d4d68b4", UID:"7015f9dd-325f-455f-aba1-d9efcd102de1", APIVersion:"apps/v1", ResourceVersion:"2141", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-v7v7j
W1017 16:26:30.258] E1017 16:26:30.257433   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:30.358] apps.sh:337: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
I1017 16:26:30.359] apps.sh:338: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 16:26:30.455] apps.sh:339: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1017 16:26:30.558] deployment.apps/nginx-deployment image updated
W1017 16:26:30.659] I1017 16:26:30.562278   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment", UID:"8be3b2ae-22cb-4a3c-984d-382a80941b2a", APIVersion:"apps/v1", ResourceVersion:"2155", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-59df9b5f5b to 1
W1017 16:26:30.660] I1017 16:26:30.566064   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment-59df9b5f5b", UID:"dfab42b5-0222-4a5e-8ab1-6be81afad8fa", APIVersion:"apps/v1", ResourceVersion:"2156", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-59df9b5f5b-s6fhk
W1017 16:26:30.661] I1017 16:26:30.577652   52942 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1571329566-26216
I1017 16:26:30.761] apps.sh:342: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1017 16:26:30.778] apps.sh:343: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1017 16:26:30.976] deployment.apps/nginx-deployment image updated
W1017 16:26:31.077] error: unable to find container named "redis"
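The 'unable to find container named "redis"' error above is what kubectl set image reports when the named container does not exist in the pod template. A hypothetical reproduction and a working form, assuming the first container in the test's multi-container fixture is named nginx:

    kubectl set image deployment/nginx-deployment redis=redis:6   # fails: no such container
    kubectl set image deployment/nginx-deployment nginx=k8s.gcr.io/nginx:1.7.9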
W1017 16:26:31.078] E1017 16:26:30.905907   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:31.078] E1017 16:26:31.007379   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:31.133] E1017 16:26:31.132806   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:31.234] apps.sh:348: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 16:26:31.235] apps.sh:349: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1017 16:26:31.277] deployment.apps/nginx-deployment image updated
I1017 16:26:31.382] apps.sh:352: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1017 16:26:31.481] apps.sh:353: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1017 16:26:31.667] apps.sh:356: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1017 16:26:31.767] apps.sh:357: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1017 16:26:31.862] deployment.apps/nginx-deployment image updated
W1017 16:26:31.963] E1017 16:26:31.258731   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:31.964] I1017 16:26:31.872999   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment", UID:"8be3b2ae-22cb-4a3c-984d-382a80941b2a", APIVersion:"apps/v1", ResourceVersion:"2173", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 2
W1017 16:26:31.965] I1017 16:26:31.880030   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment-598d4d68b4", UID:"7015f9dd-325f-455f-aba1-d9efcd102de1", APIVersion:"apps/v1", ResourceVersion:"2177", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-598d4d68b4-v7v7j
W1017 16:26:31.965] I1017 16:26:31.880067   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment", UID:"8be3b2ae-22cb-4a3c-984d-382a80941b2a", APIVersion:"apps/v1", ResourceVersion:"2176", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7d758dbc54 to 1
W1017 16:26:31.966] I1017 16:26:31.884167   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment-7d758dbc54", UID:"65d82cf3-9d9b-4184-89e4-72545f981fc9", APIVersion:"apps/v1", ResourceVersion:"2181", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-7d758dbc54-9dbrr
W1017 16:26:31.966] E1017 16:26:31.908052   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:32.009] E1017 16:26:32.009059   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:32.110] apps.sh:360: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 16:26:32.111] apps.sh:361: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 16:26:32.277] apps.sh:364: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 16:26:32.375] apps.sh:365: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 16:26:32.457] deployment.apps "nginx-deployment" deleted
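The checks at apps.sh:360-365 above show every container image set to k8s.gcr.io/nginx:test-cmd at once, consistent with kubectl set image's wildcard form (an assumption; the exact apps.sh invocation is not in this log):

    kubectl set image deployment/nginx-deployment '*=k8s.gcr.io/nginx:test-cmd'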
W1017 16:26:32.557] E1017 16:26:32.134849   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:32.558] E1017 16:26:32.260314   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:32.658] apps.sh:371: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 16:26:32.735] deployment.apps/nginx-deployment created
W1017 16:26:32.837] I1017 16:26:32.739378   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment", UID:"252edd75-023d-4f8a-a35b-18ccf0a89ba9", APIVersion:"apps/v1", ResourceVersion:"2206", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-598d4d68b4 to 3
W1017 16:26:32.838] I1017 16:26:32.743402   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment-598d4d68b4", UID:"41941c0f-0c62-4484-862b-433970b28451", APIVersion:"apps/v1", ResourceVersion:"2207", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-55xfp
W1017 16:26:32.839] I1017 16:26:32.747099   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment-598d4d68b4", UID:"41941c0f-0c62-4484-862b-433970b28451", APIVersion:"apps/v1", ResourceVersion:"2207", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-dnr6g
W1017 16:26:32.839] I1017 16:26:32.747456   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment-598d4d68b4", UID:"41941c0f-0c62-4484-862b-433970b28451", APIVersion:"apps/v1", ResourceVersion:"2207", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-j9zbd
W1017 16:26:32.910] E1017 16:26:32.909527   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:33.010] configmap/test-set-env-config created
I1017 16:26:33.083] secret/test-set-env-secret created
W1017 16:26:33.184] E1017 16:26:33.010998   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:33.185] E1017 16:26:33.136843   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 16:26:33.262] E1017 16:26:33.261703   52942 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 16:26:33.363] apps.sh:376: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
I1017 16:26:33.363] apps.sh:378: Successful get configmaps/test-set-env-config {{.metadata.name}}: test-set-env-config
I1017 16:26:33.389] apps.sh:379: Successful get secret {{range.items}}{{.metadata.name}}:{{end}}: test-set-env-secret:
I1017 16:26:33.490] deployment.apps/nginx-deployment env updated
I1017 16:26:33.590] apps.sh:383: Successful get deploy nginx-deployment {{ (index (index .spec.template.spec.containers 0).env 0).name}}: KEY_2
I1017 16:26:33.684] apps.sh:385: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 1
I1017 16:26:33.785] deployment.apps/nginx-deployment env updated
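The "env updated" lines above follow the creation of test-set-env-config and test-set-env-secret; kubectl set env can inject variables directly or import them from those objects. A minimal sketch, assuming deployment/nginx-deployment (the exact keys in the test fixtures are not shown here):

    # Set a literal variable on every container in the pod template.
    kubectl set env deployment/nginx-deployment KEY_2=value2
    # Import all keys from a ConfigMap or Secret as environment variables.
    kubectl set env deployment/nginx-deployment --from=configmap/test-set-env-config
    kubectl set env deployment/nginx-deployment --from=secret/test-set-env-secret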
W1017 16:26:33.886] I1017 16:26:33.494831   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment", UID:"252edd75-023d-4f8a-a35b-18ccf0a89ba9", APIVersion:"apps/v1", ResourceVersion:"2222", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6b9f7756b4 to 1
W1017 16:26:33.887] I1017 16:26:33.497694   52942 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment-6b9f7756b4", UID:"42614202-3afd-4c44-bfe6-02878cf37c08", APIVersion:"apps/v1", ResourceVersion:"2223", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6b9f7756b4-c4hsb
W1017 16:26:33.887] I1017 16:26:33.796097   52942 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571329578-32711", Name:"nginx-deployment", UID:"25