Result: FAILURE
Tests: 1 failed / 2857 succeeded
Started: 2020-03-24 02:13
Elapsed: 23m45s
Revision: release-1.16
Resultstore: https://source.cloud.google.com/results/invocations/eeaef583-94fc-453e-8042-09f45fcc52ea/targets/test

Test Failures


k8s.io/kubernetes/test/integration/daemonset TestSimpleDaemonSetLaunchesPods/ScheduleDaemonSetPods_(true)/TestSimpleDaemonSetLaunchesPods/ScheduleDaemonSetPods_(true)_(&DaemonSetUpdateStrategy{Type:OnDelete,RollingUpdate:nil,}) 1m18s

go test -v k8s.io/kubernetes/test/integration/daemonset -run 'TestSimpleDaemonSetLaunchesPods/ScheduleDaemonSetPods_\(true\)/TestSimpleDaemonSetLaunchesPods/ScheduleDaemonSetPods_\(true\)_\(&DaemonSetUpdateStrategy{Type:OnDelete,RollingUpdate:nil,}\)$'
=== RUN   TestSimpleDaemonSetLaunchesPods/ScheduleDaemonSetPods_(true)/TestSimpleDaemonSetLaunchesPods/ScheduleDaemonSetPods_(true)_(&DaemonSetUpdateStrategy{Type:OnDelete,RollingUpdate:nil,})
I0324 02:29:25.202477  108224 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0324 02:29:25.202507  108224 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0324 02:29:25.202519  108224 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0324 02:29:25.202529  108224 master.go:259] Using reconciler: 
I0324 02:29:25.204338  108224 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.204509  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.204629  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.205369  108224 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0324 02:29:25.205403  108224 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.205456  108224 reflector.go:158] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0324 02:29:25.205665  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.205698  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.206232  108224 watch_cache.go:405] Replace watchCache (rev: 11630) 
I0324 02:29:25.206310  108224 store.go:1342] Monitoring events count at <storage-prefix>//events
I0324 02:29:25.206350  108224 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.206441  108224 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0324 02:29:25.206487  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.206504  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.207134  108224 watch_cache.go:405] Replace watchCache (rev: 11630) 
I0324 02:29:25.207835  108224 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0324 02:29:25.207879  108224 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.207977  108224 reflector.go:158] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0324 02:29:25.208014  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.208030  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.209666  108224 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0324 02:29:25.209698  108224 reflector.go:158] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0324 02:29:25.209868  108224 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.209983  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.210008  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.210418  108224 watch_cache.go:405] Replace watchCache (rev: 11630) 
I0324 02:29:25.210867  108224 watch_cache.go:405] Replace watchCache (rev: 11630) 
I0324 02:29:25.211370  108224 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0324 02:29:25.211402  108224 reflector.go:158] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0324 02:29:25.211566  108224 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.211706  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.211727  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.212331  108224 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0324 02:29:25.212407  108224 watch_cache.go:405] Replace watchCache (rev: 11630) 
I0324 02:29:25.212473  108224 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.212602  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.212624  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.212686  108224 reflector.go:158] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0324 02:29:25.213530  108224 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0324 02:29:25.213610  108224 reflector.go:158] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0324 02:29:25.213631  108224 watch_cache.go:405] Replace watchCache (rev: 11630) 
I0324 02:29:25.213710  108224 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.213820  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.214001  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.214636  108224 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0324 02:29:25.214801  108224 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.214816  108224 reflector.go:158] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0324 02:29:25.215176  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.215204  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.215505  108224 watch_cache.go:405] Replace watchCache (rev: 11630) 
I0324 02:29:25.216291  108224 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0324 02:29:25.216400  108224 reflector.go:158] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0324 02:29:25.216492  108224 watch_cache.go:405] Replace watchCache (rev: 11630) 
I0324 02:29:25.216619  108224 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.216729  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.216829  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.217521  108224 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0324 02:29:25.217703  108224 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.217813  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.217843  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.217926  108224 reflector.go:158] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0324 02:29:25.218240  108224 watch_cache.go:405] Replace watchCache (rev: 11631) 
I0324 02:29:25.219028  108224 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0324 02:29:25.219186  108224 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.219318  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.219334  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.219434  108224 reflector.go:158] Listing and watching *core.Node from storage/cacher.go:/minions
I0324 02:29:25.219797  108224 watch_cache.go:405] Replace watchCache (rev: 11631) 
I0324 02:29:25.220526  108224 watch_cache.go:405] Replace watchCache (rev: 11631) 
I0324 02:29:25.220832  108224 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0324 02:29:25.220889  108224 reflector.go:158] Listing and watching *core.Pod from storage/cacher.go:/pods
I0324 02:29:25.221020  108224 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.221187  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.221216  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.222113  108224 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0324 02:29:25.222263  108224 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.222584  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.222603  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.222715  108224 reflector.go:158] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0324 02:29:25.222862  108224 watch_cache.go:405] Replace watchCache (rev: 11631) 
I0324 02:29:25.223859  108224 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0324 02:29:25.223890  108224 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.224077  108224 reflector.go:158] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0324 02:29:25.224189  108224 watch_cache.go:405] Replace watchCache (rev: 11631) 
I0324 02:29:25.224424  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.224447  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.225294  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.225322  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.226045  108224 watch_cache.go:405] Replace watchCache (rev: 11632) 
I0324 02:29:25.226372  108224 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.226499  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.226538  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.227640  108224 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0324 02:29:25.227666  108224 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0324 02:29:25.227703  108224 reflector.go:158] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0324 02:29:25.228341  108224 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.228529  108224 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.229305  108224 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.230053  108224 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.231071  108224 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.231488  108224 watch_cache.go:405] Replace watchCache (rev: 11633) 
I0324 02:29:25.231895  108224 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.232321  108224 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.232623  108224 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.232885  108224 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.233827  108224 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.234621  108224 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.234843  108224 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.235861  108224 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.236142  108224 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.236720  108224 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.236994  108224 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.237882  108224 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.238071  108224 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.238215  108224 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.238387  108224 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.238566  108224 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.238698  108224 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.238870  108224 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.239911  108224 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.240184  108224 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.241042  108224 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.242082  108224 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.242369  108224 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.242644  108224 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.243413  108224 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.243883  108224 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.244740  108224 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.245662  108224 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.248574  108224 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.249499  108224 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.249841  108224 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.249955  108224 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0324 02:29:25.249978  108224 master.go:461] Enabling API group "authentication.k8s.io".
I0324 02:29:25.250002  108224 master.go:461] Enabling API group "authorization.k8s.io".
I0324 02:29:25.250389  108224 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.250541  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.250567  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.252488  108224 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0324 02:29:25.252634  108224 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0324 02:29:25.252689  108224 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.252834  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.252863  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.253499  108224 watch_cache.go:405] Replace watchCache (rev: 11636) 
I0324 02:29:25.283714  108224 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0324 02:29:25.283840  108224 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0324 02:29:25.283988  108224 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.284172  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.284226  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.284874  108224 watch_cache.go:405] Replace watchCache (rev: 11638) 
I0324 02:29:25.285460  108224 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0324 02:29:25.285481  108224 master.go:461] Enabling API group "autoscaling".
I0324 02:29:25.285592  108224 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0324 02:29:25.285686  108224 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.285866  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.285886  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.287498  108224 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0324 02:29:25.287652  108224 reflector.go:158] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0324 02:29:25.287915  108224 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.288270  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.288397  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.288899  108224 watch_cache.go:405] Replace watchCache (rev: 11640) 
I0324 02:29:25.289746  108224 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0324 02:29:25.289776  108224 master.go:461] Enabling API group "batch".
I0324 02:29:25.289809  108224 reflector.go:158] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0324 02:29:25.289940  108224 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.290268  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.290345  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.290682  108224 watch_cache.go:405] Replace watchCache (rev: 11640) 
I0324 02:29:25.291713  108224 watch_cache.go:405] Replace watchCache (rev: 11640) 
I0324 02:29:25.292370  108224 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0324 02:29:25.292406  108224 master.go:461] Enabling API group "certificates.k8s.io".
I0324 02:29:25.292599  108224 reflector.go:158] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0324 02:29:25.292594  108224 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.292709  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.292725  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.293509  108224 watch_cache.go:405] Replace watchCache (rev: 11640) 
I0324 02:29:25.294036  108224 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0324 02:29:25.294217  108224 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.294320  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.294336  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.294439  108224 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0324 02:29:25.295598  108224 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0324 02:29:25.295618  108224 master.go:461] Enabling API group "coordination.k8s.io".
I0324 02:29:25.295635  108224 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0324 02:29:25.295656  108224 watch_cache.go:405] Replace watchCache (rev: 11640) 
I0324 02:29:25.295831  108224 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0324 02:29:25.295800  108224 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.296192  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.296233  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.297013  108224 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0324 02:29:25.297054  108224 master.go:461] Enabling API group "extensions".
I0324 02:29:25.297207  108224 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.297396  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.297424  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.297521  108224 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0324 02:29:25.298462  108224 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0324 02:29:25.298500  108224 reflector.go:158] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0324 02:29:25.298698  108224 watch_cache.go:405] Replace watchCache (rev: 11640) 
I0324 02:29:25.299237  108224 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.299512  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.300690  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.300003  108224 watch_cache.go:405] Replace watchCache (rev: 11640) 
I0324 02:29:25.300085  108224 watch_cache.go:405] Replace watchCache (rev: 11640) 
I0324 02:29:25.302511  108224 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0324 02:29:25.302559  108224 master.go:461] Enabling API group "networking.k8s.io".
I0324 02:29:25.302612  108224 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.302955  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.302988  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.303044  108224 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0324 02:29:25.303611  108224 watch_cache.go:405] Replace watchCache (rev: 11640) 
I0324 02:29:25.303753  108224 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0324 02:29:25.303769  108224 master.go:461] Enabling API group "node.k8s.io".
I0324 02:29:25.303912  108224 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.303980  108224 reflector.go:158] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0324 02:29:25.304055  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.304071  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.305105  108224 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0324 02:29:25.305157  108224 reflector.go:158] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0324 02:29:25.305280  108224 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.305418  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.305444  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.306120  108224 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0324 02:29:25.306141  108224 master.go:461] Enabling API group "policy".
I0324 02:29:25.306171  108224 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.306261  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.306261  108224 reflector.go:158] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0324 02:29:25.306277  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.307257  108224 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0324 02:29:25.307531  108224 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0324 02:29:25.307663  108224 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.307798  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.307826  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.308393  108224 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0324 02:29:25.308431  108224 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.308541  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.308558  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.308681  108224 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0324 02:29:25.308810  108224 watch_cache.go:405] Replace watchCache (rev: 11640) 
I0324 02:29:25.308922  108224 watch_cache.go:405] Replace watchCache (rev: 11640) 
I0324 02:29:25.309497  108224 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0324 02:29:25.309654  108224 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.309787  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.309803  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.309877  108224 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0324 02:29:25.310471  108224 watch_cache.go:405] Replace watchCache (rev: 11641) 
I0324 02:29:25.310521  108224 watch_cache.go:405] Replace watchCache (rev: 11641) 
I0324 02:29:25.310985  108224 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0324 02:29:25.311429  108224 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.311027  108224 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0324 02:29:25.311889  108224 watch_cache.go:405] Replace watchCache (rev: 11641) 
I0324 02:29:25.312709  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.312940  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.313410  108224 watch_cache.go:405] Replace watchCache (rev: 11641) 
I0324 02:29:25.314172  108224 watch_cache.go:405] Replace watchCache (rev: 11642) 
I0324 02:29:25.314584  108224 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0324 02:29:25.314734  108224 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0324 02:29:25.315179  108224 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.315384  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.315413  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.315603  108224 watch_cache.go:405] Replace watchCache (rev: 11642) 
I0324 02:29:25.316429  108224 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0324 02:29:25.316463  108224 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.316597  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.316619  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.316735  108224 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0324 02:29:25.317631  108224 watch_cache.go:405] Replace watchCache (rev: 11642) 
I0324 02:29:25.318226  108224 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0324 02:29:25.318328  108224 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0324 02:29:25.318380  108224 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.318699  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.318721  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.319773  108224 watch_cache.go:405] Replace watchCache (rev: 11642) 
I0324 02:29:25.319888  108224 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0324 02:29:25.319923  108224 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0324 02:29:25.320003  108224 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0324 02:29:25.320982  108224 watch_cache.go:405] Replace watchCache (rev: 11642) 
I0324 02:29:25.322033  108224 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.322340  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.322366  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.323229  108224 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0324 02:29:25.323383  108224 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0324 02:29:25.323402  108224 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.323719  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.323747  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.324233  108224 watch_cache.go:405] Replace watchCache (rev: 11642) 
I0324 02:29:25.325255  108224 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0324 02:29:25.325316  108224 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0324 02:29:25.325319  108224 master.go:461] Enabling API group "scheduling.k8s.io".
I0324 02:29:25.325524  108224 master.go:450] Skipping disabled API group "settings.k8s.io".
I0324 02:29:25.325674  108224 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.325788  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.325803  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.326345  108224 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0324 02:29:25.326461  108224 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0324 02:29:25.326626  108224 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.326724  108224 watch_cache.go:405] Replace watchCache (rev: 11642) 
I0324 02:29:25.326808  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.326832  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.327772  108224 watch_cache.go:405] Replace watchCache (rev: 11642) 
I0324 02:29:25.327800  108224 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0324 02:29:25.327991  108224 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.328047  108224 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0324 02:29:25.328267  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.328291  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.328786  108224 watch_cache.go:405] Replace watchCache (rev: 11642) 
I0324 02:29:25.332248  108224 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0324 02:29:25.332327  108224 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.336538  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.336583  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.337535  108224 reflector.go:158] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0324 02:29:25.341374  108224 watch_cache.go:405] Replace watchCache (rev: 11643) 
I0324 02:29:25.344026  108224 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0324 02:29:25.344740  108224 reflector.go:158] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0324 02:29:25.346895  108224 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.348690  108224 watch_cache.go:405] Replace watchCache (rev: 11646) 
I0324 02:29:25.356572  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.356783  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.359884  108224 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0324 02:29:25.360179  108224 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0324 02:29:25.360332  108224 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.360593  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.360647  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.367617  108224 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0324 02:29:25.367663  108224 master.go:461] Enabling API group "storage.k8s.io".
I0324 02:29:25.367926  108224 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.368123  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.368123  108224 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0324 02:29:25.368158  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.368337  108224 watch_cache.go:405] Replace watchCache (rev: 11648) 
I0324 02:29:25.369299  108224 watch_cache.go:405] Replace watchCache (rev: 11648) 
I0324 02:29:25.372563  108224 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0324 02:29:25.372787  108224 reflector.go:158] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0324 02:29:25.372869  108224 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.373125  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.373149  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.375231  108224 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0324 02:29:25.375391  108224 watch_cache.go:405] Replace watchCache (rev: 11649) 
I0324 02:29:25.375508  108224 reflector.go:158] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0324 02:29:25.376017  108224 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.376370  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.376490  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.377755  108224 watch_cache.go:405] Replace watchCache (rev: 11650) 
I0324 02:29:25.378181  108224 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0324 02:29:25.378412  108224 reflector.go:158] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0324 02:29:25.378584  108224 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.378742  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.378851  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.379859  108224 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0324 02:29:25.379939  108224 reflector.go:158] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0324 02:29:25.380357  108224 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.380677  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.380793  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.382081  108224 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0324 02:29:25.382192  108224 master.go:461] Enabling API group "apps".
I0324 02:29:25.382255  108224 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.382395  108224 reflector.go:158] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0324 02:29:25.383231  108224 watch_cache.go:405] Replace watchCache (rev: 11651) 
I0324 02:29:25.388314  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.388351  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.388947  108224 watch_cache.go:405] Replace watchCache (rev: 11653) 
I0324 02:29:25.389316  108224 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0324 02:29:25.389363  108224 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0324 02:29:25.389365  108224 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.389488  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.389520  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.390673  108224 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0324 02:29:25.390754  108224 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0324 02:29:25.390848  108224 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.391637  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.391740  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.392725  108224 watch_cache.go:405] Replace watchCache (rev: 11654) 
I0324 02:29:25.392842  108224 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0324 02:29:25.392866  108224 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0324 02:29:25.393487  108224 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.392954  108224 watch_cache.go:405] Replace watchCache (rev: 11654) 
I0324 02:29:25.393201  108224 watch_cache.go:405] Replace watchCache (rev: 11654) 
I0324 02:29:25.394457  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.394483  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.395718  108224 watch_cache.go:405] Replace watchCache (rev: 11654) 
I0324 02:29:25.395803  108224 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0324 02:29:25.395834  108224 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0324 02:29:25.395866  108224 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.396025  108224 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0324 02:29:25.396683  108224 watch_cache.go:405] Replace watchCache (rev: 11654) 
I0324 02:29:25.396903  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:25.396928  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:25.397600  108224 store.go:1342] Monitoring events count at <storage-prefix>//events
I0324 02:29:25.397621  108224 master.go:461] Enabling API group "events.k8s.io".
I0324 02:29:25.397653  108224 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0324 02:29:25.397834  108224 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.398019  108224 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.398268  108224 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.398369  108224 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.398462  108224 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.398619  108224 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.398823  108224 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.398939  108224 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.399039  108224 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.399059  108224 watch_cache.go:405] Replace watchCache (rev: 11655) 
I0324 02:29:25.399131  108224 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.400022  108224 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.400305  108224 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.401088  108224 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.401341  108224 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.402092  108224 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.402339  108224 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.403676  108224 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.403925  108224 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.404605  108224 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.404862  108224 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0324 02:29:25.404929  108224 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources.
I0324 02:29:25.405563  108224 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.405715  108224 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.405970  108224 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.407007  108224 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.407900  108224 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.408694  108224 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.408946  108224 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.409727  108224 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.410433  108224 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.410689  108224 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.411364  108224 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0324 02:29:25.411438  108224 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0324 02:29:25.412355  108224 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.412715  108224 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.413309  108224 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.414043  108224 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.414664  108224 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.415432  108224 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.416134  108224 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.416719  108224 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.417357  108224 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.418126  108224 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.418735  108224 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0324 02:29:25.418928  108224 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0324 02:29:25.419682  108224 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.420346  108224 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0324 02:29:25.420414  108224 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0324 02:29:25.421225  108224 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.421743  108224 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.421976  108224 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.422483  108224 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.422971  108224 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.423496  108224 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.424198  108224 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0324 02:29:25.424265  108224 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0324 02:29:25.425037  108224 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.425701  108224 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.426070  108224 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.426825  108224 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.427128  108224 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.427380  108224 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.428035  108224 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.428293  108224 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.428542  108224 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.429261  108224 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.429516  108224 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.429786  108224 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0324 02:29:25.429841  108224 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0324 02:29:25.429849  108224 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0324 02:29:25.430488  108224 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.431300  108224 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.431954  108224 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.432524  108224 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.433273  108224 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"cec57396-c71c-4545-bdd3-5cea0ad9fd90", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0324 02:29:25.438496  108224 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0324 02:29:25.438528  108224 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0324 02:29:25.438540  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:25.438552  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:25.438562  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:25.438570  108224 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:25.438611  108224 httplog.go:90] GET /healthz: (273.551µs) 0 [Go-http-client/1.1 127.0.0.1:39884]
I0324 02:29:25.441013  108224 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.745566ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:25.444389  108224 httplog.go:90] GET /api/v1/services: (1.589466ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:25.448159  108224 httplog.go:90] GET /api/v1/services: (804.197µs) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:25.450364  108224 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0324 02:29:25.450396  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:25.450409  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:25.450419  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:25.450428  108224 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:25.450452  108224 httplog.go:90] GET /healthz: (169.42µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39884]
I0324 02:29:25.452915  108224 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.369965ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:25.455015  108224 httplog.go:90] GET /api/v1/services: (2.439153ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39884]
I0324 02:29:25.455461  108224 httplog.go:90] POST /api/v1/namespaces: (2.17772ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:25.455518  108224 httplog.go:90] GET /api/v1/services: (2.695583ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39888]
I0324 02:29:25.456941  108224 httplog.go:90] GET /api/v1/namespaces/kube-public: (853.286µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:25.458863  108224 httplog.go:90] POST /api/v1/namespaces: (1.625303ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:25.460927  108224 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.607185ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:25.462761  108224 httplog.go:90] POST /api/v1/namespaces: (1.47496ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:25.540313  108224 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0324 02:29:25.540354  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:25.540368  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:25.540378  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:25.540386  108224 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:25.540430  108224 httplog.go:90] GET /healthz: (267.232µs) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:25.551278  108224 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0324 02:29:25.551316  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:25.551330  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:25.551340  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:25.551350  108224 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:25.551382  108224 httplog.go:90] GET /healthz: (264.473µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:25.640354  108224 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0324 02:29:25.640392  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:25.640406  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:25.640419  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:25.640441  108224 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:25.640477  108224 httplog.go:90] GET /healthz: (266.891µs) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:25.651542  108224 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0324 02:29:25.651619  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:25.651634  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:25.651647  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:25.651657  108224 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:25.651695  108224 httplog.go:90] GET /healthz: (331.893µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:25.743191  108224 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0324 02:29:25.743224  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:25.743236  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:25.743246  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:25.743253  108224 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:25.743287  108224 httplog.go:90] GET /healthz: (273.259µs) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:25.751222  108224 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0324 02:29:25.751254  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:25.751266  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:25.751275  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:25.751282  108224 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:25.751327  108224 httplog.go:90] GET /healthz: (222.83µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:25.840296  108224 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0324 02:29:25.840335  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:25.840350  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:25.840360  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:25.840368  108224 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:25.840400  108224 httplog.go:90] GET /healthz: (273.032µs) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:25.851194  108224 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0324 02:29:25.851231  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:25.851245  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:25.851256  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:25.851263  108224 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:25.851300  108224 httplog.go:90] GET /healthz: (248.268µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:25.940352  108224 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0324 02:29:25.940397  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:25.940411  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:25.940421  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:25.940428  108224 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:25.940458  108224 httplog.go:90] GET /healthz: (241.308µs) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:25.951204  108224 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0324 02:29:25.951241  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:25.951254  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:25.951264  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:25.951271  108224 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:25.951301  108224 httplog.go:90] GET /healthz: (234.006µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.040384  108224 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0324 02:29:26.040422  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.040435  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:26.040446  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:26.040453  108224 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:26.040509  108224 httplog.go:90] GET /healthz: (262.649µs) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:26.051284  108224 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0324 02:29:26.051321  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.051334  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:26.051344  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:26.051352  108224 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:26.051392  108224 httplog.go:90] GET /healthz: (246.081µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.142096  108224 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0324 02:29:26.142127  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.142139  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:26.142148  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:26.142155  108224 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:26.142184  108224 httplog.go:90] GET /healthz: (227.093µs) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:26.151173  108224 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0324 02:29:26.151201  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.151213  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:26.151222  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:26.151230  108224 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:26.151255  108224 httplog.go:90] GET /healthz: (211.693µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.202766  108224 client.go:357] parsed scheme: "endpoint"
I0324 02:29:26.202864  108224 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:29:26.241205  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.241240  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:26.241251  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:26.241259  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:26.241298  108224 httplog.go:90] GET /healthz: (1.117352ms) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:26.251968  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.252000  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:26.252012  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:26.252021  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:26.252054  108224 httplog.go:90] GET /healthz: (965.052µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.341178  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.341213  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:26.341225  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:26.341234  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:26.341273  108224 httplog.go:90] GET /healthz: (1.027161ms) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:26.352089  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.352123  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:26.352135  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:26.352143  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:26.352189  108224 httplog.go:90] GET /healthz: (1.06009ms) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.439284  108224 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.389914ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.439409  108224 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.508573ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39884]
I0324 02:29:26.439718  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.383516ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.441336  108224 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.558836ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39884]
I0324 02:29:26.441348  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.360268ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.442067  108224 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.262155ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.442093  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.442112  108224 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0324 02:29:26.442122  108224 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0324 02:29:26.442131  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0324 02:29:26.442158  108224 httplog.go:90] GET /healthz: (1.440204ms) 0 [Go-http-client/1.1 127.0.0.1:39912]
I0324 02:29:26.442300  108224 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0324 02:29:26.443143  108224 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.459277ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.444006  108224 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (948.232µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39912]
I0324 02:29:26.444166  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.491186ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39884]
I0324 02:29:26.445564  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.065883ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.446109  108224 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.654016ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.446275  108224 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0324 02:29:26.446289  108224 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0324 02:29:26.447142  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.138476ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.448270  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (775.911µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.449553  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (758.25µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.450706  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (874.468µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.451714  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.451736  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:26.451772  108224 httplog.go:90] GET /healthz: (704.586µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.451969  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (884.521µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.453126  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (802.169µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.454965  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.446989ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.455162  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0324 02:29:26.456087  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (764.079µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.458817  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.29497ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.459015  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0324 02:29:26.460290  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (940.35µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.462445  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.589441ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.462697  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0324 02:29:26.463713  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (783.015µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.465502  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.391075ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.465839  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0324 02:29:26.467284  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.22543ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.468843  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.230918ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.469256  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0324 02:29:26.470191  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (778.538µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.472033  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.510286ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.472229  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0324 02:29:26.473391  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (762.893µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.474987  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.242129ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.475380  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0324 02:29:26.476519  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (991.246µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.478689  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.695244ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.478921  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0324 02:29:26.479997  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (882.764µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.483110  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.628765ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.483439  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0324 02:29:26.484476  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (862.096µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.486678  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.80763ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.487055  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0324 02:29:26.488303  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (986.169µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.489932  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.157568ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.490127  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0324 02:29:26.491749  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.426553ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.494564  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.207327ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.495015  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0324 02:29:26.496049  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (828.524µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.497779  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.255193ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.498326  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0324 02:29:26.499528  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (944.225µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.501112  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.172404ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.501369  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0324 02:29:26.502557  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.023644ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.504377  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.208925ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.504630  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0324 02:29:26.505773  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (968.073µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.507856  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.493601ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.508088  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0324 02:29:26.509186  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (854.913µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.511132  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.421391ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.511323  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0324 02:29:26.512543  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (966.588µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.514221  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.360414ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.514580  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0324 02:29:26.536085  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (8.445584ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.542899  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.771897ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.543405  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.545573  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:26.545896  108224 httplog.go:90] GET /healthz: (3.622196ms) 0 [Go-http-client/1.1 127.0.0.1:39910]
I0324 02:29:26.543428  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0324 02:29:26.547237  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (882.882µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.549664  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.993125ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.549918  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0324 02:29:26.551284  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.172791ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.551693  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.551723  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:26.551751  108224 httplog.go:90] GET /healthz: (764.661µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.553877  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.103454ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.554187  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0324 02:29:26.555540  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.162641ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.557612  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.504926ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.557899  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0324 02:29:26.558973  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (795.984µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.560574  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.153349ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.560802  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0324 02:29:26.561948  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (816.85µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.564001  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.533371ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.564213  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0324 02:29:26.565390  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (927.922µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.567488  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.748665ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.567661  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0324 02:29:26.568493  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (688.959µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.571194  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.255702ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.571510  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0324 02:29:26.572563  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (752.315µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.575278  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.900429ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.575725  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0324 02:29:26.583256  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (7.184991ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.585294  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.571352ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.585556  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0324 02:29:26.586726  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (936.502µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.588319  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.153344ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.588613  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0324 02:29:26.589457  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (680.284µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.591643  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.828695ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.591942  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0324 02:29:26.592981  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (801.528µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.595598  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.717142ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.595986  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0324 02:29:26.597201  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (848.152µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.599448  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.751302ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.599651  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0324 02:29:26.600556  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (711.315µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.602549  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.413717ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.602841  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0324 02:29:26.603781  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (692.227µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.605235  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.043385ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.605515  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0324 02:29:26.607457  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.604176ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.609148  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.204569ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.609414  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0324 02:29:26.610393  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (788.646µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.612288  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.512391ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.612563  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0324 02:29:26.615193  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (2.453772ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.625471  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.247018ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.625945  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0324 02:29:26.627171  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.05493ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.628746  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.224019ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.629660  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0324 02:29:26.630923  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.068961ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.633174  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.902094ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.633363  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0324 02:29:26.634802  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.01558ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.638789  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.453111ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.639099  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0324 02:29:26.641971  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.642002  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:26.642032  108224 httplog.go:90] GET /healthz: (1.921391ms) 0 [Go-http-client/1.1 127.0.0.1:39910]
I0324 02:29:26.642417  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (3.024469ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.645777  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.914512ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.646182  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0324 02:29:26.648208  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.851199ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.651133  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.551259ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.651369  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0324 02:29:26.651637  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.651663  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:26.651697  108224 httplog.go:90] GET /healthz: (756.506µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.652609  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (987.015µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.654380  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.406932ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.654597  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0324 02:29:26.655882  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.089509ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.657772  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.444764ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.657983  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0324 02:29:26.659524  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.360461ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.661352  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.285116ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.661557  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0324 02:29:26.662539  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (805.839µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.664284  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.343172ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.664474  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0324 02:29:26.665393  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (733.35µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.667101  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.242675ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.667343  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0324 02:29:26.668415  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (865.432µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.670768  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.926844ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.671031  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0324 02:29:26.672171  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (866.265µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.673823  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.275651ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.674059  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0324 02:29:26.675109  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (846.696µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.676871  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.403045ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.677049  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0324 02:29:26.678296  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.048553ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.680135  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.397993ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.680456  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0324 02:29:26.681412  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (771.234µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.699927  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.672684ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.700152  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0324 02:29:26.719419  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.123985ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.739901  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.635753ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.740254  108224 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0324 02:29:26.741169  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.741284  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:26.741392  108224 httplog.go:90] GET /healthz: (1.173435ms) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:26.751941  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.751968  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:26.752030  108224 httplog.go:90] GET /healthz: (924.424µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.759458  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.221703ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.780383  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.068714ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.780731  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0324 02:29:26.799539  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.246482ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.820307  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.016562ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.820565  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0324 02:29:26.839730  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.285232ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.848107  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.848133  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:26.848167  108224 httplog.go:90] GET /healthz: (928.726µs) 0 [Go-http-client/1.1 127.0.0.1:39910]
I0324 02:29:26.851741  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.851765  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:26.851805  108224 httplog.go:90] GET /healthz: (794.238µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.860145  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.92786ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.860398  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0324 02:29:26.879429  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.133949ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.899981  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.723382ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.900244  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0324 02:29:26.919241  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (916.207µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.939899  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.681537ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:26.940120  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0324 02:29:26.941686  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.941710  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:26.941754  108224 httplog.go:90] GET /healthz: (1.648771ms) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:26.951831  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:26.951860  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:26.951901  108224 httplog.go:90] GET /healthz: (854.517µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.959251  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.023729ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.979850  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.562805ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:26.980094  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0324 02:29:26.999137  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (911.529µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.019695  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.466609ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.019968  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0324 02:29:27.039356  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.047456ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.040882  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.040912  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.040945  108224 httplog.go:90] GET /healthz: (800.666µs) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:27.053157  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.053185  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.053220  108224 httplog.go:90] GET /healthz: (2.144611ms) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.060014  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.800581ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.060267  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0324 02:29:27.079275  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.010805ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.099848  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.557577ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.100024  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0324 02:29:27.119561  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.2567ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.140246  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.986882ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.140448  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0324 02:29:27.142296  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.142317  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.142349  108224 httplog.go:90] GET /healthz: (1.958124ms) 0 [Go-http-client/1.1 127.0.0.1:39910]
I0324 02:29:27.152608  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.152632  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.152664  108224 httplog.go:90] GET /healthz: (1.688662ms) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.159291  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.121208ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.179965  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.705579ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.180230  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0324 02:29:27.199291  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (983.063µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.219810  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.623305ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.220065  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0324 02:29:27.239294  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (981.074µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.240911  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.240939  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.240969  108224 httplog.go:90] GET /healthz: (869.682µs) 0 [Go-http-client/1.1 127.0.0.1:39910]
I0324 02:29:27.251745  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.251769  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.251809  108224 httplog.go:90] GET /healthz: (762.189µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.260141  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.937772ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.260370  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0324 02:29:27.279532  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.071222ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.301816  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.531642ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.302134  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0324 02:29:27.319767  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.494207ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.340028  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.778621ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.340231  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0324 02:29:27.340895  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.340916  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.340945  108224 httplog.go:90] GET /healthz: (858.262µs) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:27.353542  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.353579  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.353625  108224 httplog.go:90] GET /healthz: (2.252961ms) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.361892  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (3.653635ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.380999  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.720893ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.381259  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0324 02:29:27.399573  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.298589ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.420325  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.960051ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.420768  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0324 02:29:27.439329  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.064535ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.440711  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.440739  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.440798  108224 httplog.go:90] GET /healthz: (721.774µs) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:27.451922  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.451953  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.451991  108224 httplog.go:90] GET /healthz: (849.777µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.460273  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.021718ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.460518  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0324 02:29:27.479407  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.078663ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.500327  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.93561ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.500643  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0324 02:29:27.520679  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (2.382152ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.540066  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.799634ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.540281  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0324 02:29:27.542357  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.542386  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.542427  108224 httplog.go:90] GET /healthz: (2.306222ms) 0 [Go-http-client/1.1 127.0.0.1:39910]
I0324 02:29:27.552535  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.552562  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.552600  108224 httplog.go:90] GET /healthz: (1.556496ms) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.561091  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (2.847063ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.580664  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.65096ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.580899  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0324 02:29:27.599557  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.291734ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.620091  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.854465ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.620305  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0324 02:29:27.639417  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.098836ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.640689  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.640748  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.640787  108224 httplog.go:90] GET /healthz: (726.492µs) 0 [Go-http-client/1.1 127.0.0.1:39910]
I0324 02:29:27.651817  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.651842  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.651890  108224 httplog.go:90] GET /healthz: (747.293µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.660204  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.997394ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.660443  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0324 02:29:27.679254  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.015403ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.699781  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.538532ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.700015  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0324 02:29:27.719763  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.479277ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.741220  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.136096ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.741738  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0324 02:29:27.743061  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.743087  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.743133  108224 httplog.go:90] GET /healthz: (3.047859ms) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:27.752011  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.752044  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.752080  108224 httplog.go:90] GET /healthz: (1.022987ms) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.759305  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.061522ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.779784  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.587882ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.780001  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0324 02:29:27.799676  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.378541ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.822734  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.897192ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.823050  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0324 02:29:27.840004  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.59241ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.840872  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.840895  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.840940  108224 httplog.go:90] GET /healthz: (807.923µs) 0 [Go-http-client/1.1 127.0.0.1:39910]
I0324 02:29:27.852913  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.852937  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.852971  108224 httplog.go:90] GET /healthz: (745.148µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.860199  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.962436ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.860484  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0324 02:29:27.881744  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (3.446802ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.900468  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.179635ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.900719  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0324 02:29:27.919849  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (982.389µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.940447  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.177854ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:27.940706  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0324 02:29:27.942540  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.942569  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.942617  108224 httplog.go:90] GET /healthz: (2.468443ms) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:27.951795  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:27.951826  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:27.951857  108224 httplog.go:90] GET /healthz: (781.42µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.959429  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.127713ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.980222  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.9262ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:27.980619  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0324 02:29:27.999332  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.043429ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.020501  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.204553ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.020819  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0324 02:29:28.043789  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:28.043810  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:28.043847  108224 httplog.go:90] GET /healthz: (1.297538ms) 0 [Go-http-client/1.1 127.0.0.1:39910]
I0324 02:29:28.044116  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.479489ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.051841  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:28.051868  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:28.051903  108224 httplog.go:90] GET /healthz: (856.568µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.061409  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.157793ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.061657  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0324 02:29:28.079474  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.176261ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.103776  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.501678ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.104065  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0324 02:29:28.119326  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.067909ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.140426  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.945812ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.140740  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0324 02:29:28.141156  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:28.141174  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:28.141215  108224 httplog.go:90] GET /healthz: (1.013523ms) 0 [Go-http-client/1.1 127.0.0.1:39910]
I0324 02:29:28.151828  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:28.151860  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:28.151899  108224 httplog.go:90] GET /healthz: (799.207µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.162327  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (3.946137ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.180010  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.763474ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.180256  108224 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0324 02:29:28.199461  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.152998ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.203732  108224 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.731102ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.221617  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.234811ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.221883  108224 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0324 02:29:28.239331  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.029454ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.240760  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:28.240783  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:28.240813  108224 httplog.go:90] GET /healthz: (710.513µs) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:28.240938  108224 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.128362ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.251761  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:28.251789  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:28.251835  108224 httplog.go:90] GET /healthz: (759.916µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.260053  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.827213ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.260306  108224 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0324 02:29:28.280601  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (2.069011ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.282530  108224 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.404516ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.300055  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.687098ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.300439  108224 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0324 02:29:28.319953  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.676479ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.321563  108224 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.117219ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.340087  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.818098ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.340288  108224 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0324 02:29:28.341849  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:28.341873  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:28.341907  108224 httplog.go:90] GET /healthz: (1.569342ms) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:28.351936  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:28.351969  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:28.352008  108224 httplog.go:90] GET /healthz: (849.019µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.359648  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.163794ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.361167  108224 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.037517ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.380120  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.773335ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.380452  108224 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0324 02:29:28.399559  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.097574ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.401163  108224 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.158595ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.420236  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.953452ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.420628  108224 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0324 02:29:28.439767  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.458812ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.441128  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:28.441161  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:28.441208  108224 httplog.go:90] GET /healthz: (918.915µs) 0 [Go-http-client/1.1 127.0.0.1:39910]
I0324 02:29:28.441944  108224 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.342912ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.453163  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:28.453190  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:28.453224  108224 httplog.go:90] GET /healthz: (2.138089ms) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.459872  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.67117ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.460124  108224 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0324 02:29:28.479328  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.042126ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.480841  108224 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.075328ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.501578  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.349968ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.501821  108224 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0324 02:29:28.519936  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.528955ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.521435  108224 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.052465ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.540998  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.676525ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39886]
I0324 02:29:28.542120  108224 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0324 02:29:28.542752  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:28.542783  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:28.542821  108224 httplog.go:90] GET /healthz: (2.672278ms) 0 [Go-http-client/1.1 127.0.0.1:39910]
I0324 02:29:28.551923  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:28.551953  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:28.551995  108224 httplog.go:90] GET /healthz: (950.334µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.562289  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (4.002947ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.573155  108224 httplog.go:90] GET /api/v1/namespaces/kube-system: (10.428855ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.581564  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.305106ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.581937  108224 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0324 02:29:28.599325  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.071021ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.600778  108224 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.057924ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.620618  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.34071ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.621038  108224 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0324 02:29:28.639576  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.25751ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.640778  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:28.640806  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:28.640836  108224 httplog.go:90] GET /healthz: (762.33µs) 0 [Go-http-client/1.1 127.0.0.1:39886]
I0324 02:29:28.641236  108224 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.26862ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.651759  108224 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0324 02:29:28.651786  108224 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0324 02:29:28.651847  108224 httplog.go:90] GET /healthz: (806.315µs) 0 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.660125  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.911869ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.660522  108224 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0324 02:29:28.679126  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (981.702µs) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.680480  108224 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.036634ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.699719  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.478675ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.699896  108224 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0324 02:29:28.719434  108224 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.044665ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.720975  108224 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.06932ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.740078  108224 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.800663ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39910]
I0324 02:29:28.740319  108224 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0324 02:29:28.741009  108224 httplog.go:90] GET /healthz: (946.378µs) 200 [Go-http-client/1.1 127.0.0.1:39886]
W0324 02:29:28.742706  108224 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0324 02:29:28.742756  108224 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0324 02:29:28.742782  108224 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0324 02:29:28.742817  108224 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0324 02:29:28.742993  108224 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0324 02:29:28.743052  108224 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0324 02:29:28.743064  108224 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0324 02:29:28.743095  108224 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0324 02:29:28.743111  108224 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0324 02:29:28.743133  108224 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0324 02:29:28.743149  108224 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0324 02:29:28.743165  108224 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0324 02:29:28.743178  108224 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0324 02:29:28.743189  108224 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0324 02:29:28.743242  108224 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0324 02:29:28.743252  108224 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0324 02:29:28.743669  108224 reflector.go:120] Starting reflector *v1.DaemonSet (12h0m0s) from k8s.io/client-go/informers/factory.go:134
I0324 02:29:28.743693  108224 reflector.go:158] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:134
I0324 02:29:28.744040  108224 reflector.go:120] Starting reflector *v1.ControllerRevision (12h0m0s) from k8s.io/client-go/informers/factory.go:134
I0324 02:29:28.744052  108224 reflector.go:158] Listing and watching *v1.ControllerRevision from k8s.io/client-go/informers/factory.go:134
I0324 02:29:28.744363  108224 reflector.go:120] Starting reflector *v1.Pod (12h0m0s) from k8s.io/client-go/informers/factory.go:134
I0324 02:29:28.744376  108224 reflector.go:158] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0324 02:29:28.744458  108224 httplog.go:90] GET /apis/apps/v1/daemonsets?limit=500&resourceVersion=0: (531.553µs) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format/daemonset-informers 127.0.0.1:39886]
I0324 02:29:28.744797  108224 reflector.go:120] Starting reflector *v1.Node (12h0m0s) from k8s.io/client-go/informers/factory.go:134
I0324 02:29:28.744811  108224 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0324 02:29:28.744970  108224 httplog.go:90] GET /apis/apps/v1/controllerrevisions?limit=500&resourceVersion=0: (373.917µs) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format/daemonset-informers 127.0.0.1:39910]
I0324 02:29:28.745417  108224 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (443.766µs) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format/daemonset-informers 127.0.0.1:39886]
I0324 02:29:28.745498  108224 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (343.074µs) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format/daemonset-informers 127.0.0.1:40144]
I0324 02:29:28.745518  108224 get.go:250] Starting watch for /apis/apps/v1/daemonsets, rv=11651 labels= fields= timeout=5m29s
I0324 02:29:28.745648  108224 get.go:250] Starting watch for /apis/apps/v1/controllerrevisions, rv=11654 labels= fields= timeout=5m53s
I0324 02:29:28.745746  108224 daemon_controller.go:267] Starting daemon sets controller
I0324 02:29:28.745771  108224 shared_informer.go:197] Waiting for caches to sync for daemon sets
I0324 02:29:28.745782  108224 shared_informer.go:227] caches populated
I0324 02:29:28.745788  108224 shared_informer.go:204] Caches are synced for daemon sets 
I0324 02:29:28.746063  108224 get.go:250] Starting watch for /api/v1/pods, rv=11631 labels= fields= timeout=6m29s
I0324 02:29:28.746079  108224 shared_informer.go:227] caches populated
I0324 02:29:28.746093  108224 get.go:250] Starting watch for /api/v1/nodes, rv=11631 labels= fields= timeout=7m25s
I0324 02:29:28.748249  108224 daemon_controller.go:177] Adding daemon set foo
I0324 02:29:28.748306  108224 httplog.go:90] POST /apis/apps/v1/namespaces/simple-daemonset-test/daemonsets: (2.072497ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40146]
I0324 02:29:28.750301  108224 httplog.go:90] POST /apis/apps/v1/namespaces/simple-daemonset-test/controllerrevisions: (1.491342ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format/daemonset-controller 127.0.0.1:40148]
I0324 02:29:28.750522  108224 controller_utils.go:201] Controller simple-daemonset-test/foo either never recorded expectations, or the ttl expired.
I0324 02:29:28.750557  108224 controller_utils.go:218] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"simple-daemonset-test/foo", timestamp:time.Time{wall:0xbf967a022cbc8987, ext:78643967461, loc:(*time.Location)(0x749bd00)}}
I0324 02:29:28.750631  108224 daemon_controller.go:1012] Nodes needing daemon pods for daemon set foo: [], creating 0
I0324 02:29:28.750659  108224 daemon_controller.go:1076] Pods to delete for daemon set foo: [], deleting 0
I0324 02:29:28.750667  108224 controller_utils.go:184] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"simple-daemonset-test/foo", timestamp:time.Time{wall:0xbf967a022cbc8987, ext:78643967461, loc:(*time.Location)(0x749bd00)}}
I0324 02:29:28.750686  108224 daemon_controller.go:1143] Updating daemon set status
I0324 02:29:28.750788  108224 daemon_controller.go:387] ControllerRevision foo-8c7fb5b5c added.
I0324 02:29:28.752291  108224 httplog.go:90] GET /healthz: (1.094717ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40150]
I0324 02:29:28.752476  108224 httplog.go:90] POST /api/v1/nodes: (1.698077ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40148]
I0324 02:29:28.752789  108224 node_tree.go:93] Added node "node-0" in group "" to NodeTree
I0324 02:29:28.753123  108224 httplog.go:90] PUT /apis/apps/v1/namespaces/simple-daemonset-test/daemonsets/foo/status: (2.034651ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format/daemonset-controller 127.0.0.1:40146]
I0324 02:29:28.753323  108224 daemon_controller.go:1205] Finished syncing daemon set "simple-daemonset-test/foo" (5.023043ms)
I0324 02:29:28.753362  108224 daemon_controller.go:183] Updating daemon set foo
I0324 02:29:28.753524  108224 controller_utils.go:184] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"simple-daemonset-test/foo", timestamp:time.Time{wall:0xbf967a022cbc8987, ext:78643967461, loc:(*time.Location)(0x749bd00)}}
I0324 02:29:28.753582  108224 controller_utils.go:218] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"simple-daemonset-test/foo", timestamp:time.Time{wall:0xbf967a022ceab14f, ext:78646992306, loc:(*time.Location)(0x749bd00)}}
I0324 02:29:28.753600  108224 daemon_controller.go:1012] Nodes needing daemon pods for daemon set foo: [node-0], creating 1
I0324 02:29:28.755108  108224 httplog.go:90] GET /api/v1/namespaces/default: (2.485653ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40150]
I0324 02:29:28.756327  108224 httplog.go:90] POST /api/v1/nodes: (3.412088ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40148]
I0324 02:29:28.756836  108224 node_tree.go:93] Added node "node-1" in group "" to NodeTree
I0324 02:29:28.757222  108224 httplog.go:90] POST /api/v1/namespaces: (1.751366ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40150]
I0324 02:29:28.757273  108224 scheduling_queue.go:830] About to try and schedule pod simple-daemonset-test/foo-hxrlm
I0324 02:29:28.757287  108224 scheduler.go:532] Attempting to schedule pod: simple-daemonset-test/foo-hxrlm
I0324 02:29:28.757329  108224 daemon_controller.go:506] Pod foo-hxrlm added.
I0324 02:29:28.757340  108224 controller_utils.go:235] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"simple-daemonset-test/foo", timestamp:time.Time{wall:0xbf967a022ceab14f, ext:78646992306, loc:(*time.Location)(0x749bd00)}}
I0324 02:29:28.757509  108224 scheduler_binder.go:256] AssumePodVolumes for pod "simple-daemonset-test/foo-hxrlm", node "node-0"
I0324 02:29:28.757524  108224 scheduler_binder.go:266] AssumePodVolumes for pod "simple-daemonset-test/foo-hxrlm", node "node-0": all PVCs bound and nothing to do
I0324 02:29:28.757534  108224 httplog.go:90] POST /api/v1/namespaces/simple-daemonset-test/pods: (3.495274ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format/daemonset-controller 127.0.0.1:40146]
I0324 02:29:28.757618  108224 factory.go:610] Attempting to bind foo-hxrlm to node-0
I0324 02:29:28.758665  108224 controller_utils.go:588] Controller foo created pod foo-hxrlm
I0324 02:29:28.758700  108224 daemon_controller.go:1076] Pods to delete for daemon set foo: [], deleting 0
I0324 02:29:28.758711  108224 controller_utils.go:184] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"simple-daemonset-test/foo", timestamp:time.Time{wall:0xbf967a022ceab14f, ext:78646992306, loc:(*time.Location)(0x749bd00)}}
I0324 02:29:28.758748  108224 daemon_controller.go:1143] Updating daemon set status
I0324 02:29:28.759240  108224 event.go:274] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"simple-daemonset-test", Name:"foo", UID:"3e4269d8-b20b-4079-bb9d-e9f0e2b72e95", APIVersion:"apps/v1", ResourceVersion:"12111", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: foo-hxrlm
I0324 02:29:28.759889  108224 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.394814ms) 404 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40150]
I0324 02:29:28.760124  108224 httplog.go:90] POST /api/v1/nodes: (3.06841ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40148]
I0324 02:29:28.760227  108224 node_tree.go:93] Added node "node-2" in group "" to NodeTree
I0324 02:29:28.762059  108224 httplog.go:90] PUT /apis/apps/v1/namespaces/simple-daemonset-test/daemonsets/foo/status: (2.330475ms) 200 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format/daemonset-controller 127.0.0.1:40146]
I0324 02:29:28.762204  108224 daemon_controller.go:183] Updating daemon set foo
I0324 02:29:28.762530  108224 daemon_controller.go:1205] Finished syncing daemon set "simple-daemonset-test/foo" (9.167137ms)
I0324 02:29:28.762609  108224 httplog.go:90] POST /api/v1/namespaces/simple-daemonset-test/events: (2.346004ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format/daemonset-controller 127.0.0.1:40154]
I0324 02:29:28.762625  108224 httplog.go:90] POST /api/v1/nodes: (2.122219ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40148]
I0324 02:29:28.762721  108224 controller_utils.go:184] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"simple-daemonset-test/foo", timestamp:time.Time{wall:0xbf967a022ceab14f, ext:78646992306, loc:(*time.Location)(0x749bd00)}}
I0324 02:29:28.762806  108224 controller_utils.go:218] Setting expectations &controller.ControlleeExpectations{add:2, del:0, key:"simple-daemonset-test/foo", timestamp:time.Time{wall:0xbf967a022d7772be, ext:78656216875, loc:(*time.Location)(0x749bd00)}}
I0324 02:29:28.762826  108224 daemon_controller.go:1012] Nodes needing daemon pods for daemon set foo: [node-1 node-2], creating 2
I0324 02:29:28.763139  108224 node_tree.go:93] Added node "node-3" in group "" to NodeTree
I0324 02:29:28.764787  108224 httplog.go:90] POST /api/v1/namespaces/default/services: (4.474872ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40150]
I0324 02:29:28.765509  108224 httplog.go:90] POST /api/v1/namespaces/simple-daemonset-test/pods: (2.406708ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format/daemonset-controller 127.0.0.1:40146]
I0324 02:29:28.765537  108224 httplog.go:90] POST /api/v1/nodes: (2.361935ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40148]
I0324 02:29:28.765711  108224 controller_utils.go:588] Controller foo created pod foo-5gq9t
I0324 02:29:28.765791  108224 scheduling_queue.go:830] About to try and schedule pod simple-daemonset-test/foo-5gq9t
I0324 02:29:28.765803  108224 scheduler.go:532] Attempting to schedule pod: simple-daemonset-test/foo-5gq9t
I0324 02:29:28.765846  108224 daemon_controller.go:506] Pod foo-5gq9t added.
I0324 02:29:28.765854  108224 controller_utils.go:235] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"simple-daemonset-test/foo", timestamp:time.Time{wall:0xbf967a022d7772be, ext:78656216875, loc:(*time.Location)(0x749bd00)}}
I0324 02:29:28.765909  108224 node_tree.go:93] Added node "node-4" in group "" to NodeTree
I0324 02:29:28.765934  108224 event.go:274] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"simple-daemonset-test", Name:"foo", UID:"3e4269d8-b20b-4079-bb9d-e9f0e2b72e95", APIVersion:"apps/v1", ResourceVersion:"12117", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: foo-5gq9t
I0324 02:29:28.766090  108224 factory.go:545] Unable to schedule simple-daemonset-test/foo-5gq9t: no fit: 0/4 nodes are available: 3 node(s) didn't match node selector.; waiting
I0324 02:29:28.766124  108224 factory.go:619] Updating pod condition for simple-daemonset-test/foo-5gq9t to (PodScheduled==False, Reason=Unschedulable)
I0324 02:29:28.766148  108224 httplog.go:90] POST /api/v1/namespaces/simple-daemonset-test/pods/foo-hxrlm/binding: (4.468889ms) 201 [daemonset.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40152]
I0324 02:29:28.766310  108224 scheduler.go:669] pod simple-daemonset-test/foo-hxrlm is bound successfully on node "node-0", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<0>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<100>|StorageEphemeral<0>.".
E0324 02:29:28.766024  108224 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 11876 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x3c01020, 0x744d540)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x3c01020, 0x744d540)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/kubernetes/pkg/scheduler/core.(*genericScheduler).podFitsOnNode(0xc000a0c6c0, 0xc005078a60, 0xc00eee0c00, 0x4ec7f40, 0xc004991cb0, 0x0, 0xc0050d6c90, 0x4f72040, 0xc00f980e70, 0xc00c547f00, ...)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/scheduler/core/generic_scheduler.go:676 +0x1a1
k8s.io/kubernetes/pkg/scheduler/core.(*genericScheduler).findNodesThatFit.func1(0x2)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/scheduler/core/generic_scheduler.go:492 +0x1b1
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.ParallelizeUntil.func1(0xc00201e7f0, 0xc00f61b480, 0xc004b7c530, 0xc00f8555e0)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/parallelizer.go:57 +0xbf
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.ParallelizeUntil
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/parallelizer.go:49 +0x148
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x1698261]

goroutine 11876 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x105
panic(0x3c01020, 0x744d540)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/kubernetes/pkg/scheduler/core.(*genericScheduler).podFitsOnNode(0xc000a0c6c0, 0xc005078a60, 0xc00eee0c00, 0x4ec7f40, 0xc004991cb0, 0x0, 0xc0050d6c90, 0x4f72040, 0xc00f980e70, 0xc00c547f00, ...)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/scheduler/core/generic_scheduler.go:676 +0x1a1
k8s.io/kubernetes/pkg/scheduler/core.(*genericScheduler).findNodesThatFit.func1(0x2)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/scheduler/core/generic_scheduler.go:492 +0x1b1
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.ParallelizeUntil.func1(0xc00201e7f0, 0xc00f61b480, 0xc004b7c530, 0xc00f8555e0)
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/parallelizer.go:57 +0xbf
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.ParallelizeUntil
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/parallelizer.go:49 +0x148
FAIL	k8s.io/kubernetes/test/integration/daemonset	78.726s
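The FAIL above is a nil pointer dereference inside the scheduler's podFitsOnNode (generic_scheduler.go:676), running in a workqueue.ParallelizeUntil worker; runtime.HandleCrash recovers the panic, logs it, and re-raises it, which is why the trace appears twice. As an illustration only (the `nodeInfo` type, its `Name` method, and `checkNode` below are hypothetical stand-ins, not the scheduler's actual types), a minimal Go sketch of how this class of panic surfaces through a recover-and-report wrapper:

```go
package main

import "fmt"

// nodeInfo is a hypothetical stand-in for the per-node state the scheduler
// threads through its fit checks; in the real trace a nil value reached
// podFitsOnNode via one of its arguments.
type nodeInfo struct{ name string }

// Name dereferences the receiver, so calling it on a nil *nodeInfo panics
// with "invalid memory address or nil pointer dereference".
func (n *nodeInfo) Name() string { return n.name }

// checkNode mimics the shape of a fit check running under a HandleCrash-style
// wrapper: the panic is recovered and reported as an error instead of
// crashing the caller outright.
func checkNode(n *nodeInfo) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	_ = n.Name() // panics when n is nil
	return nil
}

func main() {
	fmt.Println(checkNode(&nodeInfo{name: "127.0.0.1"}))
	fmt.Println(checkNode(nil))
}
```

In the actual failure, HandleCrash re-raises the panic after logging it, so the test process still dies with the SIGSEGV shown above rather than converting it into an error.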

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20200324-022610.xml



2857 tests passed, 4 tests skipped.

Error lines from build-log.txt

... skipping 622 lines ...
stat /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/third_party/etcd/Documentation/README.md: no such file or directory
+++ [0324 02:20:48] Building go targets for linux/amd64:
    cmd/kube-controller-manager
+++ [0324 02:21:15] Starting controller-manager
Flag --port has been deprecated, see --secure-port instead.
I0324 02:21:16.358425   53292 serving.go:319] Generated self-signed cert in-memory
W0324 02:21:16.890536   53292 authentication.go:387] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0324 02:21:16.890587   53292 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0324 02:21:16.890594   53292 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0324 02:21:16.890609   53292 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0324 02:21:16.890622   53292 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0324 02:21:16.890645   53292 controllermanager.go:161] Version: v1.16.9-beta.0.7+5116ee4b159565
I0324 02:21:16.891364   53292 secure_serving.go:123] Serving securely on [::]:10257
I0324 02:21:16.891865   53292 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0324 02:21:16.891938   53292 leaderelection.go:241] attempting to acquire leader lease  kube-system/kube-controller-manager...
+++ [0324 02:21:16] On try 2, controller-manager: ok
... skipping 17 lines ...
I0324 02:21:17.108690   53292 shared_informer.go:197] Waiting for caches to sync for PVC protection
I0324 02:21:17.109042   53292 controllermanager.go:534] Started "daemonset"
W0324 02:21:17.109124   53292 controllermanager.go:513] "tokencleaner" is disabled
I0324 02:21:17.109068   53292 daemon_controller.go:267] Starting daemon sets controller
I0324 02:21:17.109237   53292 shared_informer.go:197] Waiting for caches to sync for daemon sets
I0324 02:21:17.109470   53292 node_lifecycle_controller.go:77] Sending events to api server
E0324 02:21:17.109507   53292 core.go:201] failed to start cloud node lifecycle controller: no cloud provider provided
W0324 02:21:17.109521   53292 controllermanager.go:526] Skipping "cloud-node-lifecycle"
I0324 02:21:17.110253   53292 controllermanager.go:534] Started "persistentvolume-binder"
I0324 02:21:17.110652   53292 pv_controller_base.go:282] Starting persistent volume controller
I0324 02:21:17.110674   53292 shared_informer.go:197] Waiting for caches to sync for persistent volume
I0324 02:21:17.110760   53292 controllermanager.go:534] Started "clusterrole-aggregation"
W0324 02:21:17.110776   53292 controllermanager.go:526] Skipping "root-ca-cert-publisher"
... skipping 34 lines ...
I0324 02:21:17.321999   53292 controllermanager.go:534] Started "namespace"
I0324 02:21:17.322136   53292 namespace_controller.go:186] Starting namespace controller
I0324 02:21:17.322160   53292 shared_informer.go:197] Waiting for caches to sync for namespace
I0324 02:21:17.322469   53292 controllermanager.go:534] Started "csrcleaner"
W0324 02:21:17.322484   53292 controllermanager.go:513] "bootstrapsigner" is disabled
I0324 02:21:17.322549   53292 cleaner.go:81] Starting CSR cleaner controller
E0324 02:21:17.322985   53292 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0324 02:21:17.323002   53292 controllermanager.go:526] Skipping "service"
I0324 02:21:17.323385   53292 controllermanager.go:534] Started "pv-protection"
I0324 02:21:17.323564   53292 pv_protection_controller.go:81] Starting PV protection controller
I0324 02:21:17.323588   53292 shared_informer.go:197] Waiting for caches to sync for PV protection
I0324 02:21:17.323825   53292 controllermanager.go:534] Started "replicationcontroller"
I0324 02:21:17.323969   53292 replica_set.go:182] Starting replicationcontroller controller
... skipping 45 lines ...
I0324 02:21:17.737945   53292 controllermanager.go:534] Started "statefulset"
W0324 02:21:17.737997   53292 controllermanager.go:526] Skipping "csrsigning"
I0324 02:21:17.739037   53292 stateful_set.go:145] Starting stateful set controller
I0324 02:21:17.739054   53292 shared_informer.go:197] Waiting for caches to sync for stateful set
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests
W0324 02:21:17.767309   53292 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist

+++ Running case: test-cmd.run_kubectl_version_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_version_tests
+++ [0324 02:21:17] Testing kubectl version
I0324 02:21:17.807744   53292 shared_informer.go:204] Caches are synced for endpoint 
... skipping 36 lines ...
Successful: --output json has correct client info
I0324 02:21:18.311405   53292 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
Successful: --output json has correct server info
I0324 02:21:18.315315   53292 shared_informer.go:204] Caches are synced for disruption 
I0324 02:21:18.315359   53292 disruption.go:338] Sending events to api server.
+++ [0324 02:21:18] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
E0324 02:21:18.320799   53292 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
E0324 02:21:18.320841   53292 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
E0324 02:21:18.327877   53292 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0324 02:21:18.333221   53292 shared_informer.go:204] Caches are synced for deployment 
E0324 02:21:18.340084   53292 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0324 02:21:18.414111   53292 shared_informer.go:204] Caches are synced for resource quota 
I0324 02:21:18.432452   53292 shared_informer.go:204] Caches are synced for garbage collector 
I0324 02:21:18.432488   53292 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
Successful: --client --output json has correct client info
Successful: --client --output json has no server info
+++ [0324 02:21:18] Testing kubectl version: compare json output using additional --short flag
... skipping 50 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0324 02:21:21] Creating namespace namespace-1585016481-15045
namespace/namespace-1585016481-15045 created
Context "test" modified.
+++ [0324 02:21:21] Testing RESTMapper
+++ [0324 02:21:21] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
... skipping 590 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name, label selector, or --all flag specified
core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector. 
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:211: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 12 lines ...
poddisruptionbudget.policy/test-pdb-2 created
core.sh:245: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
poddisruptionbudget.policy/test-pdb-3 created
core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
Name:         env-test-pod
Namespace:    test-kubectl-describe-pod
Priority:     0
... skipping 176 lines ...
pod/valid-pod patched
core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
pod/valid-pod patched
core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
pod/valid-pod patched
core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0324 02:21:56] "kubectl patch with resourceVersion 498" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W0324 02:21:57.548069   53292 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
node/node-v1-test replaced
core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0324 02:21:57.811022   53292 event.go:274] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-v1-test", UID:"dc3e2f57-ba82-4f04-ae15-b2e491fd8830", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-v1-test event: Registered Node node-v1-test in Controller
node "node-v1-test" deleted
core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 23 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:2.0
    name: kubernetes-pause
has:localonlyvalue
core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 86 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0324 02:22:06] Creating namespace namespace-1585016526-24899
namespace/namespace-1585016526-24899 created
Context "test" modified.
+++ [0324 02:22:06] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 41 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [0324 02:22:07] "kubectl create with empty string list" returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

... skipping 17 lines ...
pod "test-pod" deleted
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I0324 02:22:09.567646   49749 client.go:357] parsed scheme: "endpoint"
I0324 02:22:09.567694   49749 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0324 02:22:09.578201   49749 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
kind.mygroup.example.com/myobj serverside-applied (server dry run)
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
+++ exit code: 0
Recording: run_kubectl_run_tests
Running command: run_kubectl_run_tests

+++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 96 lines ...
Context "test" modified.
+++ [0324 02:22:12] Testing kubectl create filter
create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 29 lines ...
I0324 02:22:14.815032   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016532-21473", Name:"nginx", UID:"ef181172-6572-4455-b8a4-fb38c76faeda", APIVersion:"apps/v1", ResourceVersion:"579", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
I0324 02:22:14.819080   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016532-21473", Name:"nginx-8484dd655", UID:"67127278-9312-4d81-a23e-f93b71e78a0d", APIVersion:"apps/v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-s99pc
I0324 02:22:14.821982   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016532-21473", Name:"nginx-8484dd655", UID:"67127278-9312-4d81-a23e-f93b71e78a0d", APIVersion:"apps/v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-k62d2
I0324 02:22:14.822772   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016532-21473", Name:"nginx-8484dd655", UID:"67127278-9312-4d81-a23e-f93b71e78a0d", APIVersion:"apps/v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-7s2pb
apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1585016532-21473\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1585016532-21473"
Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1585016532-21473\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2020-03-24T02:22:14Z" "generation":'\x01' "labels":map["name":"nginx"] "name":"nginx" "namespace":"namespace-1585016532-21473" "resourceVersion":"592" "selfLink":"/apis/apps/v1/namespaces/namespace-1585016532-21473/deployments/nginx" "uid":"ef181172-6572-4455-b8a4-fb38c76faeda"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2020-03-24T02:22:14Z" "lastUpdateTime":"2020-03-24T02:22:14Z" "message":"Deployment does not have minimum availability." 
"reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2020-03-24T02:22:14Z" "lastUpdateTime":"2020-03-24T02:22:14Z" "message":"ReplicaSet \"nginx-8484dd655\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
I0324 02:22:21.124395   53292 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1585016524-13039
deployment.apps/nginx configured
I0324 02:22:24.344585   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016532-21473", Name:"nginx", UID:"d0e4c680-e57c-4a0a-9395-42a17de2c402", APIVersion:"apps/v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
I0324 02:22:24.350921   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016532-21473", Name:"nginx-668b6c7744", UID:"d4936486-a6ed-49c7-b84d-59b5f0dbca60", APIVersion:"apps/v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-jbpbl
I0324 02:22:24.357170   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016532-21473", Name:"nginx-668b6c7744", UID:"d4936486-a6ed-49c7-b84d-59b5f0dbca60", APIVersion:"apps/v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-qxnnv
I0324 02:22:24.357323   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016532-21473", Name:"nginx-668b6c7744", UID:"d4936486-a6ed-49c7-b84d-59b5f0dbca60", APIVersion:"apps/v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-n8852
... skipping 142 lines ...
+++ [0324 02:22:31] Creating namespace namespace-1585016551-20794
namespace/namespace-1585016551-20794 created
Context "test" modified.
+++ [0324 02:22:31] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 23 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1585016551-20794 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1585016551-20794 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
FAIL!
message:Error from server (NotFound): pods "abc" not found
has not:List
99 /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
Successful
message:I0324 02:22:33.505942   63372 loader.go:375] Config loaded from file:  /tmp/tmp.EnH6K4Ke7i/.kube/config
I0324 02:22:33.507431   63372 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0324 02:22:33.531706   63372 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
... skipping 660 lines ...
Successful
message:NAME    DATA   AGE
one     0      1s
three   0      1s
two     0      1s
STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
Successful
message:STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
+++ [0324 02:22:40] Creating namespace namespace-1585016560-26742
namespace/namespace-1585016560-26742 created
Context "test" modified.
get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
... skipping 56 lines ...
}
get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
<no value>
Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2020-03-24T02:22:40Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1585016560-26742", "resourceVersion":"694", "selfLink":"/api/v1/namespaces/namespace-1585016560-26742/pods/valid-pod", "uid":"40eeb263-f32d-462d-b6b0-7abec184e778"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2020-03-24T02:22:40Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1585016560-26742","resourceVersion":"694","selfLink":"/api/v1/namespaces/namespace-1585016560-26742/pods/valid-pod","uid":"40eeb263-f32d-462d-b6b0-7abec184e778"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2020-03-24T02:22:40Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1585016560-26742 resourceVersion:694 selfLink:/api/v1/namespaces/namespace-1585016560-26742/pods/valid-pod uid:40eeb263-f32d-462d-b6b0-7abec184e778] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
has:map has no entry for key "missing"
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:STATUS
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:valid-pod
Successful
message:pod/valid-pod
has not:STATUS
Successful
message:pod/valid-pod
... skipping 72 lines ...
status:
  phase: Pending
  qosClass: Guaranteed
---
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
... skipping 35 lines ...
+++ command: run_kubectl_exec_pod_tests
+++ [0324 02:22:46] Creating namespace namespace-1585016566-13955
namespace/namespace-1585016566-13955 created
Context "test" modified.
+++ [0324 02:22:46] Testing kubectl exec POD COMMAND
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 2 lines ...
+++ command: run_kubectl_exec_resource_name_tests
+++ [0324 02:22:46] Creating namespace namespace-1585016566-8587
namespace/namespace-1585016566-8587 created
Context "test" modified.
+++ [0324 02:22:46] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:error: the server doesn't have a resource type "foo"
has:error:
Successful
message:Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0324 02:22:47.381118   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016566-8587", Name:"frontend", UID:"a1ae75db-d626-49fc-a55c-1ddd36aa30e9", APIVersion:"apps/v1", ResourceVersion:"748", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-h6s9v
I0324 02:22:47.384184   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016566-8587", Name:"frontend", UID:"a1ae75db-d626-49fc-a55c-1ddd36aa30e9", APIVersion:"apps/v1", ResourceVersion:"748", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7qjld
I0324 02:22:47.384222   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016566-8587", Name:"frontend", UID:"a1ae75db-d626-49fc-a55c-1ddd36aa30e9", APIVersion:"apps/v1", ResourceVersion:"748", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-62n9q
configmap/test-set-env-config created
Successful
message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
Successful
message:Error from server (BadRequest): pod frontend-62n9q does not have a host assigned
has not:not found
Successful
message:Error from server (BadRequest): pod frontend-62n9q does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"ebc880f4-117f-4c39-a8a6-7f3a8d4d4f3b","resourceVersion":"768","creationTimestamp":"2020-03-24T02:22:48Z"}}
... skipping 2 lines ...
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"ebc880f4-117f-4c39-a8a6-7f3a8d4d4f3b","resourceVersion":"769","creationTimestamp":"2020-03-24T02:22:48Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"ebc880f4-117f-4c39-a8a6-7f3a8d4d4f3b"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
valid-pod   0/1     Pending   0          0s
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:Timeout exceeded while reading body
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 158 lines ...
foo.company.com/test patched
crd.sh:236: Successful get foos/test {{.patched}}: value1
foo.company.com/test patched
crd.sh:238: Successful get foos/test {{.patched}}: value2
foo.company.com/test patched
crd.sh:240: Successful get foos/test {{.patched}}: <no value>
+++ [0324 02:22:58] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 189 lines ...
crd.sh:450: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
namespace/non-native-resources created
bar.company.com/test created
crd.sh:455: Successful get bars {{len .items}}: 1
namespace "non-native-resources" deleted
crd.sh:458: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
Recording: run_cmd_with_img_tests
... skipping 11 lines ...
I0324 02:23:32.538633   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016612-9922", Name:"test1-6cdffdb5b8", UID:"e6cd7f29-1be4-4a1b-a29c-fef5b3215eef", APIVersion:"apps/v1", ResourceVersion:"927", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-jhs94
Successful
message:deployment.apps/test1 created
has:deployment.apps/test1 created
deployment.apps "test1" deleted
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
+++ [0324 02:23:32] Testing recursive resources
+++ [0324 02:23:32] Creating namespace namespace-1585016612-22050
namespace/namespace-1585016612-22050 created
W0324 02:23:32.904417   49749 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E0324 02:23:32.905673   53292 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
W0324 02:23:32.998490   49749 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E0324 02:23:32.999879   53292 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0324 02:23:33.089682   49749 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E0324 02:23:33.090799   53292 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0324 02:23:33.192053   49749 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E0324 02:23:33.193445   53292 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
E0324 02:23:33.906865   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0324 02:23:34.001087   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Name:         busybox0
Namespace:    namespace-1585016612-22050
Priority:     0
Node:         <none>
Labels:       app=busybox0
... skipping 98 lines ...
QoS Class:        BestEffort
Node-Selectors:   <none>
Tolerations:      <none>
Events:           <none>
unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:app=busybox1
E0324 02:23:34.092063   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Name:         busybox0
Namespace:    namespace-1585016612-22050
Priority:     0
Node:         <none>
Labels:       app=busybox0
... skipping 43 lines ...
QoS Class:        BestEffort
Node-Selectors:   <none>
Tolerations:      <none>
Events:           <none>
unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0324 02:23:34.194516   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox0 configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:23:34.908213   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:35.002303   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx created
I0324 02:23:35.039706   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016612-22050", Name:"nginx", UID:"00e2fcd0-c6a6-4377-98cd-2f40d50028cf", APIVersion:"apps/v1", ResourceVersion:"952", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
I0324 02:23:35.045307   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016612-22050", Name:"nginx-f87d999f7", UID:"225d2e2e-3169-46ad-a7cc-00dcab6895c8", APIVersion:"apps/v1", ResourceVersion:"953", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-2jjpf
I0324 02:23:35.047100   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016612-22050", Name:"nginx-f87d999f7", UID:"225d2e2e-3169-46ad-a7cc-00dcab6895c8", APIVersion:"apps/v1", ResourceVersion:"953", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-pd5qg
I0324 02:23:35.049198   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016612-22050", Name:"nginx-f87d999f7", UID:"225d2e2e-3169-46ad-a7cc-00dcab6895c8", APIVersion:"apps/v1", ResourceVersion:"953", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-hnksg
E0324 02:23:35.093399   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
E0324 02:23:35.195812   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
Successful
message:apiVersion: extensions/v1beta1
... skipping 41 lines ...
deployment.apps "nginx" deleted
generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0324 02:23:35.909372   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0324 02:23:36.003598   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
E0324 02:23:36.094583   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0324 02:23:36.197004   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0324 02:23:36.628326   53292 namespace_controller.go:171] Namespace has been deleted non-native-resources
generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:23:36.910530   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/busybox0 created
E0324 02:23:37.004667   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0324 02:23:37.006538   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016612-22050", Name:"busybox0", UID:"db5c9d71-5e7c-4fd9-9614-353abfdf23a4", APIVersion:"v1", ResourceVersion:"983", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-ck7b9
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0324 02:23:37.010959   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016612-22050", Name:"busybox1", UID:"4e09e672-77cb-468e-ab94-5f13f909ff0f", APIVersion:"v1", ResourceVersion:"985", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-k4hht
E0324 02:23:37.095718   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0324 02:23:37.198211   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0324 02:23:37.911829   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
E0324 02:23:38.005961   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
E0324 02:23:38.096941   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:38.199366   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0324 02:23:38.657664   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016612-22050", Name:"busybox0", UID:"db5c9d71-5e7c-4fd9-9614-353abfdf23a4", APIVersion:"v1", ResourceVersion:"1004", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-sjwv5
I0324 02:23:38.666474   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016612-22050", Name:"busybox1", UID:"4e09e672-77cb-468e-ab94-5f13f909ff0f", APIVersion:"v1", ResourceVersion:"1009", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-dkgk2
generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
E0324 02:23:38.913107   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0324 02:23:39.007240   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:39.098152   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:23:39.200680   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx1-deployment created
I0324 02:23:39.350935   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016612-22050", Name:"nginx1-deployment", UID:"d51fe58d-21db-49c7-9bed-0477d8e62585", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0324 02:23:39.353210   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016612-22050", Name:"nginx1-deployment-7bdbbfb5cf", UID:"3c6901c8-dad7-42cc-90cb-a030cae8fc10", APIVersion:"apps/v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-p86wq
I0324 02:23:39.356234   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016612-22050", Name:"nginx0-deployment", UID:"ce889114-e72c-4936-9fd9-b4d9a793df80", APIVersion:"apps/v1", ResourceVersion:"1027", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
I0324 02:23:39.358154   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016612-22050", Name:"nginx1-deployment-7bdbbfb5cf", UID:"3c6901c8-dad7-42cc-90cb-a030cae8fc10", APIVersion:"apps/v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-rgzhl
I0324 02:23:39.360581   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016612-22050", Name:"nginx0-deployment-57c6bff7f6", UID:"62c4f3f7-7068-4b37-94c3-1de08d72ea45", APIVersion:"apps/v1", ResourceVersion:"1032", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-jxb9b
I0324 02:23:39.365896   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016612-22050", Name:"nginx0-deployment-57c6bff7f6", UID:"62c4f3f7-7068-4b37-94c3-1de08d72ea45", APIVersion:"apps/v1", ResourceVersion:"1032", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-ppv4r
generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
E0324 02:23:39.914474   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
E0324 02:23:40.008553   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx1-deployment resumed
deployment.apps/nginx0-deployment resumed
E0324 02:23:40.099268   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
E0324 02:23:40.201756   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
E0324 02:23:40.915741   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:41.009950   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:41.100442   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:41.203080   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0324 02:23:41.563566   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016612-22050", Name:"busybox0", UID:"c4e8592b-1bb5-41dc-88bc-cad362861feb", APIVersion:"v1", ResourceVersion:"1074", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-nq2gg
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0324 02:23:41.567850   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016612-22050", Name:"busybox1", UID:"95dad932-2b81-41a7-a824-9481be619e25", APIVersion:"v1", ResourceVersion:"1076", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-m5t7d
generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
E0324 02:23:41.917040   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
E0324 02:23:42.011152   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
E0324 02:23:42.101736   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:42.204315   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:42.918230   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:43.012393   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [0324 02:23:43] Testing kubectl(v1:namespaces)
E0324 02:23:43.103076   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace created
E0324 02:23:43.205649   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
namespace "my-namespace" deleted
E0324 02:23:43.919562   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
... skipping 29 lines ...
namespace "namespace-1585016569-715" deleted
namespace "namespace-1585016570-15614" deleted
namespace "namespace-1585016572-8681" deleted
namespace "namespace-1585016574-26532" deleted
namespace "namespace-1585016612-22050" deleted
namespace "namespace-1585016612-9922" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1585016478-29731" deleted
... skipping 27 lines ...
namespace "namespace-1585016569-715" deleted
namespace "namespace-1585016570-15614" deleted
namespace "namespace-1585016572-8681" deleted
namespace "namespace-1585016574-26532" deleted
namespace "namespace-1585016612-22050" deleted
namespace "namespace-1585016612-9922" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
E0324 02:23:48.925844   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/other created
E0324 02:23:49.019973   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
E0324 02:23:49.110420   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:23:49.213655   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod/valid-pod created
core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
namespace "other" deleted
E0324 02:23:49.927081   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:50.021303   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:50.111646   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:50.214812   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0324 02:23:50.221992   53292 shared_informer.go:197] Waiting for caches to sync for resource quota
I0324 02:23:50.222048   53292 shared_informer.go:204] Caches are synced for resource quota 
I0324 02:23:50.639447   53292 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0324 02:23:50.639505   53292 shared_informer.go:204] Caches are synced for garbage collector 
E0324 02:23:50.928437   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:51.022613   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:51.112861   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:51.215975   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:51.929673   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:52.023861   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:52.114120   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:52.217156   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0324 02:23:52.441361   53292 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1585016612-22050
I0324 02:23:52.445790   53292 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1585016612-22050
E0324 02:23:52.930987   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:53.025148   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:53.115454   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:53.218326   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:53.935543   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:54.026285   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:54.125567   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:54.219626   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:54.937030   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_secrets_test
Running command: run_secrets_test

E0324 02:23:55.027719   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ Running case: test-cmd.run_secrets_test 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_secrets_test
+++ [0324 02:23:55] Creating namespace namespace-1585016635-8209
E0324 02:23:55.126656   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1585016635-8209 created
Context "test" modified.
+++ [0324 02:23:55] Testing secrets
E0324 02:23:55.220969   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0324 02:23:55.260761   69604 loader.go:375] Config loaded from file:  /tmp/tmp.EnH6K4Ke7i/.kube/config
Successful
message:apiVersion: v1
data:
  key1: dmFsdWUx
kind: Secret
... skipping 32 lines ...
namespace/test-secrets created
core.sh:729: Successful get namespaces/test-secrets {{.metadata.name}}: test-secrets
core.sh:733: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
core.sh:737: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
(Bcore.sh:738: Successful get secret/test-secret --namespace=test-secrets {{.type}}: test-type
(BE0324 02:23:55.938187   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret "test-secret" deleted
E0324 02:23:56.028873   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:748: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:23:56.127887   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret/test-secret created
E0324 02:23:56.222281   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:752: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:753: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
(Bsecret "test-secret" deleted
core.sh:763: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
core.sh:766: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
secret "test-secret" deleted
E0324 02:23:56.939395   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret/test-secret created
E0324 02:23:57.030177   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
E0324 02:23:57.129500   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
E0324 02:23:57.223605   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret "test-secret" deleted
secret/secret-string-data created
core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
secret "secret-string-data" deleted
core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:23:57.940578   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret "test-secret" deleted
E0324 02:23:58.031423   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "test-secrets" deleted
E0324 02:23:58.130665   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:58.224839   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0324 02:23:58.454168   53292 namespace_controller.go:171] Namespace has been deleted my-namespace
I0324 02:23:58.839646   53292 namespace_controller.go:171] Namespace has been deleted kube-node-lease
I0324 02:23:58.856114   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016478-29731
I0324 02:23:58.860266   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016481-15045
I0324 02:23:58.862550   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016498-1427
I0324 02:23:58.864299   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016494-27756
I0324 02:23:58.871563   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016499-4206
I0324 02:23:58.873622   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016485-5381
I0324 02:23:58.879599   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016498-9799
I0324 02:23:58.884382   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016494-17356
I0324 02:23:58.900377   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016491-10524
E0324 02:23:58.941795   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:23:59.032637   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0324 02:23:59.039597   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016508-17901
I0324 02:23:59.042731   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016509-32741
I0324 02:23:59.044639   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016520-8921
I0324 02:23:59.045891   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016522-7757
I0324 02:23:59.050029   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016527-16613
I0324 02:23:59.055471   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016520-27941
I0324 02:23:59.056943   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016526-24899
I0324 02:23:59.070483   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016523-297
I0324 02:23:59.102414   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016524-13039
E0324 02:23:59.131847   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0324 02:23:59.139662   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016529-8067
E0324 02:23:59.226223   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0324 02:23:59.248479   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016532-24130
I0324 02:23:59.254789   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016550-11488
I0324 02:23:59.255694   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016549-12879
I0324 02:23:59.262005   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016566-13955
I0324 02:23:59.274069   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016551-20794
I0324 02:23:59.288758   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016569-4649
... skipping 3 lines ...
I0324 02:23:59.314359   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016566-8587
I0324 02:23:59.387492   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016570-15614
I0324 02:23:59.387781   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016572-8681
I0324 02:23:59.396151   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016574-26532
I0324 02:23:59.405913   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016612-9922
I0324 02:23:59.439092   53292 namespace_controller.go:171] Namespace has been deleted namespace-1585016612-22050
E0324 02:23:59.943408   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0324 02:23:59.944206   53292 namespace_controller.go:171] Namespace has been deleted other
E0324 02:24:00.033900   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:00.133206   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:00.227527   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:00.944659   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:01.035102   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:01.134448   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:01.228719   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:01.946017   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:02.036457   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:02.135754   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:02.230110   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:02.947333   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:03.037771   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:03.136641   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_configmap_tests
E0324 02:24:03.231401   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ [0324 02:24:03] Creating namespace namespace-1585016643-12408
namespace/namespace-1585016643-12408 created
Context "test" modified.
+++ [0324 02:24:03] Testing configmaps
configmap/test-configmap created
core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
configmap "test-configmap" deleted
core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
namespace/test-configmaps created
E0324 02:24:03.948493   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
E0324 02:24:04.038957   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
E0324 02:24:04.137852   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-binary-configmap\" }}found{{end}}{{end}}:: :
configmap/test-configmap created
E0324 02:24:04.232365   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
configmap/test-binary-configmap created
core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
configmap "test-configmap" deleted
configmap "test-binary-configmap" deleted
namespace "test-configmaps" deleted
E0324 02:24:04.949757   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:05.040181   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:05.139086   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:05.233587   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:05.951151   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:06.041398   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:06.140267   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:06.234739   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:06.952362   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:07.042657   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:07.141565   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:07.235995   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:07.953607   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:08.044002   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0324 02:24:08.134336   53292 namespace_controller.go:171] Namespace has been deleted test-secrets
E0324 02:24:08.142748   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:08.237228   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:08.954793   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:09.045331   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:09.144076   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:09.238444   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:09.956026   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_client_config_tests
Running command: run_client_config_tests

+++ Running case: test-cmd.run_client_config_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_client_config_tests
+++ [0324 02:24:10] Creating namespace namespace-1585016650-20287
E0324 02:24:10.046568   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1585016650-20287 created
E0324 02:24:10.145257   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
+++ [0324 02:24:10] Testing client config
E0324 02:24:10.239550   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
+++ exit code: 0
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_service_accounts_tests
+++ [0324 02:24:10] Creating namespace namespace-1585016650-22132
E0324 02:24:10.957191   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1585016650-22132 created
Context "test" modified.
+++ [0324 02:24:11] Testing service accounts
E0324 02:24:11.047852   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:828: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-service-accounts\" }}found{{end}}{{end}}:: :
E0324 02:24:11.146636   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/test-service-accounts created
E0324 02:24:11.240791   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
serviceaccount/test-service-account created
core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
serviceaccount "test-service-account" deleted
namespace "test-service-accounts" deleted
E0324 02:24:11.958456   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:12.049184   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:12.147901   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:12.242237   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:12.959760   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:13.050525   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:13.149128   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:13.243568   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:13.961121   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:14.051798   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:14.150330   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:14.244781   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0324 02:24:14.939157   53292 namespace_controller.go:171] Namespace has been deleted test-configmaps
E0324 02:24:14.962386   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:15.053123   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:15.151727   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:15.246030   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:15.963790   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:16.054419   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:16.153045   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:16.247335   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_job_tests
Running command: run_job_tests

+++ Running case: test-cmd.run_job_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_job_tests
+++ [0324 02:24:16] Creating namespace namespace-1585016656-28566
namespace/namespace-1585016656-28566 created
Context "test" modified.
+++ [0324 02:24:16] Testing job
E0324 02:24:16.964922   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
batch.sh:30: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-jobs\" }}found{{end}}{{end}}:: :
E0324 02:24:17.055874   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/test-jobs created
E0324 02:24:17.154137   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
batch.sh:34: Successful get namespaces/test-jobs {{.metadata.name}}: test-jobs
kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
cronjob.batch/pi created
E0324 02:24:17.248412   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
batch.sh:39: Successful get cronjob/pi --namespace=test-jobs {{.metadata.name}}: pi
NAME   SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
pi     59 23 31 2 *   False     0        <none>          0s
E0324 02:24:17.425103   53292 cronjob_controller.go:272] Cannot determine if test-jobs/pi needs to be started: too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew
I0324 02:24:17.425199   53292 event.go:274] Event(v1.ObjectReference{Kind:"CronJob", Namespace:"test-jobs", Name:"pi", UID:"24dbe785-7bfd-4336-b3c6-6c4d22089b1e", APIVersion:"batch/v1beta1", ResourceVersion:"1394", FieldPath:""}): type: 'Warning' reason: 'FailedNeedsStart' Cannot determine if job needs to be started: too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew
Name:                          pi
Namespace:                     test-jobs
Labels:                        run=pi
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  run=pi
... skipping 36 lines ...
                run=pi
Annotations:    cronjob.kubernetes.io/instantiate: manual
Controlled By:  CronJob/pi
Parallelism:    1
Completions:    1
Start Time:     Tue, 24 Mar 2020 02:24:17 +0000
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=2d15a549-2ef7-42fc-9c9c-9d1e273bd9bf
           job-name=test-job
           run=pi
  Containers:
   pi:
... skipping 12 lines ...
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-lnqzr
E0324 02:24:17.966011   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
job.batch "test-job" deleted
E0324 02:24:18.056888   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
cronjob.batch "pi" deleted
E0324 02:24:18.155472   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "test-jobs" deleted
E0324 02:24:18.249496   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 12 lines ...
I0324 02:24:21.673474   53292 namespace_controller.go:171] Namespace has been deleted test-service-accounts
E0324 02:24:21.971210   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 7 lines ...
+++ exit code: 0
Recording: run_create_job_tests
Running command: run_create_job_tests

+++ Running case: test-cmd.run_create_job_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 5 lines ...
job.batch/test-job created
create.sh:86: Successful get job test-job {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/nginx:test-cmd
job.batch "test-job" deleted
I0324 02:24:23.833240   53292 event.go:274] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1585016663-13211", Name:"test-job-pi", UID:"9521f8ba-cfb7-4514-8b22-688823fa5362", APIVersion:"batch/v1", ResourceVersion:"1423", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-nclrl
job.batch/test-job-pi created
create.sh:92: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl
E0324 02:24:23.973701   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
job.batch "test-job-pi" deleted
E0324 02:24:24.064613   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
cronjob.batch/test-pi created
E0324 02:24:24.163143   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0324 02:24:24.165757   53292 event.go:274] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1585016663-13211", Name:"my-pi", UID:"7ea645ab-38af-4aa2-9e4e-6b734462b64a", APIVersion:"batch/v1", ResourceVersion:"1431", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-fqqbr
job.batch/my-pi created
E0324 02:24:24.256837   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:[perl -Mbignum=bpi -wle print bpi(10)]
has:perl -Mbignum=bpi -wle print bpi(10)
job.batch "my-pi" deleted
cronjob.batch "test-pi" deleted
+++ exit code: 0
... skipping 7 lines ...
namespace/namespace-1585016664-995 created
Context "test" modified.
+++ [0324 02:24:24] Testing pod templates
core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0324 02:24:24.874783   49749 controller.go:606] quota admission added evaluator for: podtemplates
podtemplate/nginx created
E0324 02:24:24.975356   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
E0324 02:24:25.065785   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
NAME    CONTAINERS   IMAGES   POD LABELS
nginx   nginx        nginx    name=nginx
E0324 02:24:25.164807   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:25.258413   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
podtemplate "nginx" deleted
core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
+++ exit code: 0
Recording: run_service_tests
Running command: run_service_tests

+++ Running case: test-cmd.run_service_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_service_tests
Context "test" modified.
+++ [0324 02:24:25] Testing kubectl(v1:services)
core.sh:858: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
E0324 02:24:25.976955   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:26.067389   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:26.166121   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/redis-master created
E0324 02:24:26.260082   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:862: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
core.sh:864: Successful describe services redis-master:
Name:              redis-master
Namespace:         default
Labels:            app=redis
                   role=master
... skipping 51 lines ...
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
E0324 02:24:26.978501   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe services:
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
... skipping 18 lines ...
IP:                10.0.0.144
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
E0324 02:24:27.069205   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
... skipping 18 lines ...
IP:                10.0.0.144
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
E0324 02:24:27.167958   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:27.261500   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
... skipping 91 lines ...
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
service/redis-master selector updated
core.sh:890: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: padawan:
E0324 02:24:27.979995   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/redis-master selector updated
E0324 02:24:28.070536   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:28.169329   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:894: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
E0324 02:24:28.263136   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0324 02:24:28.270080   53292 namespace_controller.go:171] Namespace has been deleted test-jobs
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-03-24T02:24:26Z"
  labels:
... skipping 14 lines ...
  selector:
    role: padawan
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
service "redis-master" deleted
core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
E0324 02:24:28.981125   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/redis-master created
E0324 02:24:29.071731   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:920: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
E0324 02:24:29.170541   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:924: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
E0324 02:24:29.264256   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/service-v1-test created
core.sh:945: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
service/service-v1-test replaced
core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
service "redis-master" deleted
service "service-v1-test" deleted
core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
E0324 02:24:29.982401   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
E0324 02:24:30.072893   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:30.171688   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/redis-master created
E0324 02:24:30.265395   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/redis-slave created
core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
Successful
message:NAME           RSRC
kubernetes     145
redis-master   1466
... skipping 2 lines ...
core.sh:979: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
service "redis-master" deleted
service "redis-slave" deleted
core.sh:986: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:990: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
service/beep-boop created
E0324 02:24:30.983530   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:994: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
E0324 02:24:31.074054   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:998: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
E0324 02:24:31.172919   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "beep-boop" deleted
E0324 02:24:31.266575   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1005: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:1009: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
I0324 02:24:31.475991   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"7345d204-7989-4968-8099-6f49864a8263", APIVersion:"apps/v1", ResourceVersion:"1482", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-bd968f46 to 2
I0324 02:24:31.480794   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"db7e6ec2-266f-49f7-8652-348b4ba5cb5d", APIVersion:"apps/v1", ResourceVersion:"1483", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-fxn9z
I0324 02:24:31.483432   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"db7e6ec2-266f-49f7-8652-348b4ba5cb5d", APIVersion:"apps/v1", ResourceVersion:"1483", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-7wzx9
... skipping 2 lines ...
core.sh:1013: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: testmetadata:
core.sh:1014: Successful get service testmetadata {{.metadata.annotations}}: map[zone-context:home]
service/exposemetadata exposed
core.sh:1020: Successful get service exposemetadata {{.metadata.annotations}}: map[zone-context:work]
service "exposemetadata" deleted
service "testmetadata" deleted
E0324 02:24:31.984629   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "testmetadata" deleted
+++ exit code: 0
Recording: run_daemonset_tests
Running command: run_daemonset_tests

+++ Running case: test-cmd.run_daemonset_tests 
E0324 02:24:32.075323   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_daemonset_tests
+++ [0324 02:24:32] Creating namespace namespace-1585016672-6530
namespace/namespace-1585016672-6530 created
E0324 02:24:32.173982   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
+++ [0324 02:24:32] Testing kubectl(v1:daemonsets)
E0324 02:24:32.267793   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:30: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0324 02:24:32.464452   49749 controller.go:606] quota admission added evaluator for: daemonsets.apps
daemonset.apps/bind created
I0324 02:24:32.472978   49749 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
daemonset.apps/bind configured
apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
daemonset.apps/bind image updated
apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
E0324 02:24:32.985875   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind env updated
E0324 02:24:33.076252   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:42: Successful get daemonsets bind {{.metadata.generation}}: 3
E0324 02:24:33.175147   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind resource requirements updated
E0324 02:24:33.269023   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:44: Successful get daemonsets bind {{.metadata.generation}}: 4
daemonset.apps/bind restarted
apps.sh:48: Successful get daemonsets bind {{.metadata.generation}}: 5
daemonset.apps "bind" deleted
+++ exit code: 0
Recording: run_daemonset_history_tests
... skipping 4 lines ...
+++ command: run_daemonset_history_tests
+++ [0324 02:24:33] Creating namespace namespace-1585016673-30434
namespace/namespace-1585016673-30434 created
Context "test" modified.
+++ [0324 02:24:33] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:24:33.987123   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind created
E0324 02:24:34.077398   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1585016673-30434"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
E0324 02:24:34.176239   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind skipped rollback (current template already matches revision 1)
E0324 02:24:34.270281   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:73: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:74: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind configured
apps.sh:77: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:78: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:79: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
... skipping 8 lines ...
    Port:	<none>
    Host Port:	<none>
    Environment:	<none>
    Mounts:	<none>
  Volumes:	<none>
 (dry run)
E0324 02:24:34.988221   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:83: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
E0324 02:24:35.078551   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E0324 02:24:35.177410   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
E0324 02:24:35.271437   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind rolled back
E0324 02:24:35.339783   53292 daemon_controller.go:302] namespace-1585016673-30434/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1585016673-30434", SelfLink:"/apis/apps/v1/namespaces/namespace-1585016673-30434/daemonsets/bind", UID:"e1778b02-9675-433f-a9f2-d17bd223e1da", ResourceVersion:"1547", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720613474, loc:(*time.Location)(0x6c023c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1585016673-30434\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0018288a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001feea88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc000af8900), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc0018288e0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001d7c0b8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001feeadc)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind rolled back
E0324 02:24:35.862760   53292 daemon_controller.go:302] namespace-1585016673-30434/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1585016673-30434", SelfLink:"/apis/apps/v1/namespaces/namespace-1585016673-30434/daemonsets/bind", UID:"e1778b02-9675-433f-a9f2-d17bd223e1da", ResourceVersion:"1551", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720613474, loc:(*time.Location)(0x6c023c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1585016673-30434\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001ce15e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), 
Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0020d2948), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001df2180), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001ce1600), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001d7c248)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0020d299c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
E0324 02:24:35.989410   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E0324 02:24:36.079694   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:99: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
E0324 02:24:36.178684   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps "bind" deleted
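The `Successful get ... {{range.items}}...` assertions above are kubectl `-o go-template` output checks: test-cmd renders the template against the live object list and compares the result. A self-contained sketch of that rendering with Go's text/template, using a hand-built map in place of the real DaemonSet JSON (only the fields the templates touch are filled in; the data is copied from the images this test expects):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render executes a kubectl-style go-template against decoded JSON-like data.
func render(tmpl string, data interface{}) (string, error) {
	t, err := template.New("get").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// Shape mirrors `kubectl get daemonset -o json` for the fields used.
	containers := []interface{}{
		map[string]interface{}{"image": "k8s.gcr.io/pause:latest"},
		map[string]interface{}{"image": "k8s.gcr.io/nginx:test-cmd"},
	}
	list := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{
				"spec": map[string]interface{}{
					"template": map[string]interface{}{
						"spec": map[string]interface{}{"containers": containers},
					},
				},
			},
		},
	}

	// The three assertions from apps.sh:97-99.
	out, _ := render(`{{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}`, list)
	fmt.Println(out) // k8s.gcr.io/pause:latest:
	out, _ = render(`{{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}`, list)
	fmt.Println(out) // k8s.gcr.io/nginx:test-cmd:
	out, _ = render(`{{range.items}}{{(len .spec.template.spec.containers)}}{{end}}`, list)
	fmt.Println(out) // 2
}
```

Field access like `.spec` works here because text/template indexes string-keyed maps by field name, which is why the same templates work on kubectl's unstructured JSON output.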
+++ exit code: 0
Recording: run_rc_tests
Running command: run_rc_tests

+++ Running case: test-cmd.run_rc_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rc_tests
E0324 02:24:36.272646   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ [0324 02:24:36] Creating namespace namespace-1585016676-25803
namespace/namespace-1585016676-25803 created
Context "test" modified.
+++ [0324 02:24:36] Testing kubectl(v1:replicationcontrollers)
core.sh:1046: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/frontend created
I0324 02:24:36.646640   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"frontend", UID:"cebb7d0d-8063-4f2d-ae94-4f602fe0115b", APIVersion:"v1", ResourceVersion:"1559", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vdftz
I0324 02:24:36.651385   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"frontend", UID:"cebb7d0d-8063-4f2d-ae94-4f602fe0115b", APIVersion:"v1", ResourceVersion:"1559", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-l7t7g
I0324 02:24:36.651687   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"frontend", UID:"cebb7d0d-8063-4f2d-ae94-4f602fe0115b", APIVersion:"v1", ResourceVersion:"1559", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-sgj47
replicationcontroller "frontend" deleted
core.sh:1051: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:1055: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:24:36.990629   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/frontend created
I0324 02:24:37.054871   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"frontend", UID:"5a62b411-66fd-4e58-9759-d1caf10b0d2f", APIVersion:"v1", ResourceVersion:"1575", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jlp87
I0324 02:24:37.058288   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"frontend", UID:"5a62b411-66fd-4e58-9759-d1caf10b0d2f", APIVersion:"v1", ResourceVersion:"1575", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-t7brn
I0324 02:24:37.058332   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"frontend", UID:"5a62b411-66fd-4e58-9759-d1caf10b0d2f", APIVersion:"v1", ResourceVersion:"1575", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mt9q6
E0324 02:24:37.080592   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1059: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
E0324 02:24:37.179844   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:37.273807   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1061: Successful describe rc frontend:
Name:         frontend
Namespace:    namespace-1585016676-25803
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1585016676-25803
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1585016676-25803
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1585016676-25803
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1585016676-25803
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1585016676-25803
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1585016676-25803
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 3 lines ...
      cpu:     100m
      memory:  100Mi
    Environment:
      GET_HOSTS_FROM:  dns
    Mounts:            <none>
  Volumes:             <none>
E0324 02:24:37.991581   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe
Name:         frontend
Namespace:    namespace-1585016676-25803
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 9 lines ...
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-jlp87
  Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-t7brn
  Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-mt9q6
E0324 02:24:38.081662   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1079: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
I0324 02:24:38.167043   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"frontend", UID:"5a62b411-66fd-4e58-9759-d1caf10b0d2f", APIVersion:"v1", ResourceVersion:"1585", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-jlp87
E0324 02:24:38.180664   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1083: Successful get rc frontend {{.spec.replicas}}: 2
E0324 02:24:38.275043   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1087: Successful get rc frontend {{.spec.replicas}}: 2
error: Expected replicas to be 3, was 2
core.sh:1091: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller/frontend scaled
I0324 02:24:38.660222   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"frontend", UID:"5a62b411-66fd-4e58-9759-d1caf10b0d2f", APIVersion:"v1", ResourceVersion:"1591", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5ssgz
core.sh:1099: Successful get rc frontend {{.spec.replicas}}: 3
core.sh:1103: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
I0324 02:24:38.907022   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"frontend", UID:"5a62b411-66fd-4e58-9759-d1caf10b0d2f", APIVersion:"v1", ResourceVersion:"1596", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-5ssgz
core.sh:1107: Successful get rc frontend {{.spec.replicas}}: 2
E0324 02:24:38.992723   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller "frontend" deleted
E0324 02:24:39.082529   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:39.181788   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/redis-master created
I0324 02:24:39.222333   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"redis-master", UID:"9091c7fa-0533-4a67-98e5-b3cba343b669", APIVersion:"v1", ResourceVersion:"1607", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-77c2k
E0324 02:24:39.276268   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/redis-slave created
I0324 02:24:39.379296   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"redis-slave", UID:"72b877c0-fe58-497d-be57-9549e5c8ff39", APIVersion:"v1", ResourceVersion:"1612", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-wmljd
I0324 02:24:39.383541   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"redis-slave", UID:"72b877c0-fe58-497d-be57-9549e5c8ff39", APIVersion:"v1", ResourceVersion:"1612", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-dwvw6
replicationcontroller/redis-master scaled
I0324 02:24:39.462311   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"redis-master", UID:"9091c7fa-0533-4a67-98e5-b3cba343b669", APIVersion:"v1", ResourceVersion:"1620", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-t5k9j
replicationcontroller/redis-slave scaled
... skipping 11 lines ...
I0324 02:24:39.879766   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-6986c7bc94", UID:"659d666c-708c-4974-993d-1f624f7bfa89", APIVersion:"apps/v1", ResourceVersion:"1655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-g7f79
I0324 02:24:39.880300   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-6986c7bc94", UID:"659d666c-708c-4974-993d-1f624f7bfa89", APIVersion:"apps/v1", ResourceVersion:"1655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-qrjsh
deployment.apps/nginx-deployment scaled
I0324 02:24:39.960922   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment", UID:"984d4e6f-7735-463e-b037-6a73ec347eb5", APIVersion:"apps/v1", ResourceVersion:"1668", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6986c7bc94 to 1
I0324 02:24:39.968442   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-6986c7bc94", UID:"659d666c-708c-4974-993d-1f624f7bfa89", APIVersion:"apps/v1", ResourceVersion:"1669", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-qrjsh
I0324 02:24:39.970158   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-6986c7bc94", UID:"659d666c-708c-4974-993d-1f624f7bfa89", APIVersion:"apps/v1", ResourceVersion:"1669", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-g7f79
E0324 02:24:39.993616   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1127: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
E0324 02:24:40.083855   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "nginx-deployment" deleted
E0324 02:24:40.182989   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
E0324 02:24:40.277439   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
See 'kubectl expose -h' for help and examples
has:invalid deployment: no selectors
deployment.apps/nginx-deployment created
I0324 02:24:40.546633   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment", UID:"b8d6150b-6843-4ec7-ad19-172b7e9cad8c", APIVersion:"apps/v1", ResourceVersion:"1692", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
I0324 02:24:40.551478   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-6986c7bc94", UID:"e9da8e9f-a317-4b68-a2c5-96f58c46bfae", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-kc4df
I0324 02:24:40.554929   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-6986c7bc94", UID:"e9da8e9f-a317-4b68-a2c5-96f58c46bfae", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-vvxph
I0324 02:24:40.554964   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-6986c7bc94", UID:"e9da8e9f-a317-4b68-a2c5-96f58c46bfae", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-zbww8
core.sh:1146: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
service/nginx-deployment exposed
core.sh:1150: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
deployment.apps "nginx-deployment" deleted
service "nginx-deployment" deleted
E0324 02:24:40.994715   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/frontend created
I0324 02:24:41.069620   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"frontend", UID:"e17ee0ba-ae8f-4c55-9376-0e388530ede2", APIVersion:"v1", ResourceVersion:"1720", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gn9pv
I0324 02:24:41.072622   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"frontend", UID:"e17ee0ba-ae8f-4c55-9376-0e388530ede2", APIVersion:"v1", ResourceVersion:"1720", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ctkrc
I0324 02:24:41.072729   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"frontend", UID:"e17ee0ba-ae8f-4c55-9376-0e388530ede2", APIVersion:"v1", ResourceVersion:"1720", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gmzqx
E0324 02:24:41.084758   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1157: Successful get rc frontend {{.spec.replicas}}: 3
E0324 02:24:41.184141   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/frontend exposed
E0324 02:24:41.278652   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1161: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
service/frontend-2 exposed
core.sh:1165: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 443
pod/valid-pod created
service/frontend-3 exposed
core.sh:1170: Successful get service frontend-3 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 444
service/frontend-4 exposed
core.sh:1174: Successful get service frontend-4 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: default 80
E0324 02:24:41.995889   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/frontend-5 exposed
E0324 02:24:42.086054   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1178: Successful get service frontend-5 {{(index .spec.ports 0).port}}: 80
E0324 02:24:42.185320   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod "valid-pod" deleted
E0324 02:24:42.279935   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
service "frontend-5" deleted
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
Successful
message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
has:kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
service "kubernetes-serve-hostname-testing-sixty-three-characters-in-len" deleted
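The two expose results above bracket the DNS label limit: names that become DNS labels are capped at 63 characters, so the 69-character service name is rejected while the 63-character one is accepted. Below is a sketch of that validation; the regexp and error message paraphrase k8s.io/apimachinery's validation helpers and are not the actual API-server code.

```go
package main

import (
	"fmt"
	"regexp"
)

const maxLabelLen = 63 // DNS (RFC 1123) label length limit

// labelRE roughly matches a DNS-1035 label: lowercase alphanumerics and
// '-', starting with a letter and ending with an alphanumeric.
var labelRE = regexp.MustCompile(`^[a-z]([-a-z0-9]*[a-z0-9])?$`)

// validateServiceName approximates the API server's metadata.name check
// for Services, returning an error shaped like the one in the log.
func validateServiceName(name string) error {
	if len(name) > maxLabelLen {
		return fmt.Errorf("metadata.name: Invalid value: %q: must be no more than %d characters", name, maxLabelLen)
	}
	if !labelRE.MatchString(name) {
		return fmt.Errorf("metadata.name: Invalid value: %q: must consist of lower case alphanumeric characters or '-'", name)
	}
	return nil
}

func main() {
	ok := "kubernetes-serve-hostname-testing-sixty-three-characters-in-len" // exactly 63 chars
	bad := "invalid-large-service-name-that-has-more-than-sixty-three-characters"
	fmt.Println(validateServiceName(ok))  // <nil>
	fmt.Println(validateServiceName(bad)) // the "must be no more than 63 characters" error
}
```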
Successful
message:service/etcd-server exposed
has:etcd-server exposed
core.sh:1208: Successful get service etcd-server {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: port-1 2380
core.sh:1209: Successful get service etcd-server {{(index .spec.ports 1).name}} {{(index .spec.ports 1).port}}: port-2 2379
E0324 02:24:42.997021   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "etcd-server" deleted
E0324 02:24:43.087364   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1215: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
E0324 02:24:43.186475   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller "frontend" deleted
E0324 02:24:43.281314   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1219: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:1223: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/frontend created
I0324 02:24:43.536007   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"frontend", UID:"291b535f-71e8-4acb-aa0d-8c88aac42718", APIVersion:"v1", ResourceVersion:"1785", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dbs8w
I0324 02:24:43.538270   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"frontend", UID:"291b535f-71e8-4acb-aa0d-8c88aac42718", APIVersion:"v1", ResourceVersion:"1785", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7qqb7
I0324 02:24:43.540576   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"frontend", UID:"291b535f-71e8-4acb-aa0d-8c88aac42718", APIVersion:"v1", ResourceVersion:"1785", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-fh8tp
replicationcontroller/redis-slave created
I0324 02:24:43.694519   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"redis-slave", UID:"041998aa-f654-4888-9212-94fa473731fb", APIVersion:"v1", ResourceVersion:"1794", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-bffzs
I0324 02:24:43.697166   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"redis-slave", UID:"041998aa-f654-4888-9212-94fa473731fb", APIVersion:"v1", ResourceVersion:"1794", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-2w9kp
core.sh:1228: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
core.sh:1232: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
replicationcontroller "frontend" deleted
replicationcontroller "redis-slave" deleted
E0324 02:24:43.998209   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1236: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:24:44.088538   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1240: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:24:44.187706   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/frontend created
E0324 02:24:44.282395   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0324 02:24:44.285059   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"frontend", UID:"f372ea33-e81b-435e-bb18-2a4d37ff72cc", APIVersion:"v1", ResourceVersion:"1813", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ltdnj
I0324 02:24:44.287945   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"frontend", UID:"f372ea33-e81b-435e-bb18-2a4d37ff72cc", APIVersion:"v1", ResourceVersion:"1813", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-sbqs2
I0324 02:24:44.288142   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585016676-25803", Name:"frontend", UID:"f372ea33-e81b-435e-bb18-2a4d37ff72cc", APIVersion:"v1", ResourceVersion:"1813", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rvrm2
core.sh:1243: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1246: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1250: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
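The `Successful get hpa` checks above render objects through Go text/template expressions such as `{{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}`. A minimal sketch of that evaluation, outside kubectl and the test harness, assuming the object is modeled as nested maps mirroring its JSON shape (the `render` helper is hypothetical, not part of test-cmd):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render evaluates a kubectl-style Go template expression against an object.
// text/template resolves .spec.minReplicas etc. by key lookup on nested maps,
// which is how the lowercase JSON field paths in the log's assertions work.
func render(expr string, obj interface{}) (string, error) {
	t, err := template.New("get").Parse(expr)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, obj); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// A pared-down HPA matching the core.sh:1250 assertion's expected "2 3 80".
	hpa := map[string]interface{}{
		"spec": map[string]interface{}{
			"minReplicas":                    2,
			"maxReplicas":                    3,
			"targetCPUUtilizationPercentage": 80,
		},
	}
	out, err := render("{{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}", hpa)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // prints: 2 3 80
}
```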
Error: required flag(s) "max" not set


Examples:
  # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
  kubectl autoscale deployment foo --min=2 --max=10
  
... skipping 19 lines ...
Usage:
  kubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [options]

Use "kubectl options" for a list of global command-line options (applies to all commands).

replicationcontroller "frontend" deleted
E0324 02:24:44.999299   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1259: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:24:45.089842   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    name: nginx-deployment-resources
... skipping 22 lines ...
          limits:
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
E0324 02:24:45.188929   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
E0324 02:24:45.283505   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment-resources created
I0324 02:24:45.373966   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-resources", UID:"a446fc55-10b8-4f8a-a5fe-f4ebd4965f79", APIVersion:"apps/v1", ResourceVersion:"1834", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-67f8cfff5 to 3
I0324 02:24:45.378069   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-resources-67f8cfff5", UID:"5ce880e1-6fcb-4de6-895c-6e8d678d0ee8", APIVersion:"apps/v1", ResourceVersion:"1835", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-4lpsz
I0324 02:24:45.380704   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-resources-67f8cfff5", UID:"5ce880e1-6fcb-4de6-895c-6e8d678d0ee8", APIVersion:"apps/v1", ResourceVersion:"1835", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-vp6pt
I0324 02:24:45.383326   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-resources-67f8cfff5", UID:"5ce880e1-6fcb-4de6-895c-6e8d678d0ee8", APIVersion:"apps/v1", ResourceVersion:"1835", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-bm9nb
core.sh:1265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
core.sh:1266: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
core.sh:1267: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment-resources resource requirements updated
I0324 02:24:45.724119   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-resources", UID:"a446fc55-10b8-4f8a-a5fe-f4ebd4965f79", APIVersion:"apps/v1", ResourceVersion:"1849", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-55c547f795 to 1
I0324 02:24:45.726532   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-resources-55c547f795", UID:"54335d09-fb34-421e-88fb-382b322f8d8e", APIVersion:"apps/v1", ResourceVersion:"1850", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-55c547f795-l7c5b
core.sh:1270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
core.sh:1271: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
error: unable to find container named redis
E0324 02:24:46.000494   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment-resources resource requirements updated
I0324 02:24:46.066388   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-resources", UID:"a446fc55-10b8-4f8a-a5fe-f4ebd4965f79", APIVersion:"apps/v1", ResourceVersion:"1859", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-55c547f795 to 0
I0324 02:24:46.071681   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-resources", UID:"a446fc55-10b8-4f8a-a5fe-f4ebd4965f79", APIVersion:"apps/v1", ResourceVersion:"1862", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6d86564b45 to 1
I0324 02:24:46.072871   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-resources-55c547f795", UID:"54335d09-fb34-421e-88fb-382b322f8d8e", APIVersion:"apps/v1", ResourceVersion:"1863", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-55c547f795-l7c5b
I0324 02:24:46.076045   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-resources-6d86564b45", UID:"d918baf9-123c-470d-b18a-7eaa46aeb055", APIVersion:"apps/v1", ResourceVersion:"1866", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6d86564b45-tz6rh
E0324 02:24:46.091064   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1276: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
E0324 02:24:46.190140   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1277: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
E0324 02:24:46.284614   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment-resources resource requirements updated
I0324 02:24:46.330592   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-resources", UID:"a446fc55-10b8-4f8a-a5fe-f4ebd4965f79", APIVersion:"apps/v1", ResourceVersion:"1880", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-67f8cfff5 to 2
I0324 02:24:46.335911   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-resources", UID:"a446fc55-10b8-4f8a-a5fe-f4ebd4965f79", APIVersion:"apps/v1", ResourceVersion:"1883", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c478d4fdb to 1
I0324 02:24:46.336223   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-resources-67f8cfff5", UID:"5ce880e1-6fcb-4de6-895c-6e8d678d0ee8", APIVersion:"apps/v1", ResourceVersion:"1884", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-67f8cfff5-4lpsz
I0324 02:24:46.340596   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016676-25803", Name:"nginx-deployment-resources-6c478d4fdb", UID:"a25eea1f-423e-4208-aebf-bd034a8a3c72", APIVersion:"apps/v1", ResourceVersion:"1887", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c478d4fdb-pk6s4
core.sh:1280: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
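The container-level assertions above pick a single container out of the pod template with the template built-in `index`, e.g. `{{(index .spec.template.spec.containers 0).resources.limits.cpu}}`. A minimal sketch of how `index` works in Go's text/template, using a hypothetical pared-down Deployment as nested maps (this `render` helper is illustrative, not the test-cmd harness):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render evaluates a kubectl-style template expression and panics on error,
// which is acceptable for a demo.
func render(expr string, obj interface{}) string {
	t := template.Must(template.New("get").Parse(expr))
	var buf bytes.Buffer
	if err := t.Execute(&buf, obj); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// Two containers with different CPU limits, mirroring the 200m:/100m:
	// expectations in the core.sh:1276/1277 checks.
	deploy := map[string]interface{}{
		"spec": map[string]interface{}{
			"template": map[string]interface{}{
				"spec": map[string]interface{}{
					"containers": []interface{}{
						map[string]interface{}{"resources": map[string]interface{}{"limits": map[string]interface{}{"cpu": "200m"}}},
						map[string]interface{}{"resources": map[string]interface{}{"limits": map[string]interface{}{"cpu": "100m"}}},
					},
				},
			},
		},
	}
	fmt.Println(render("{{(index .spec.template.spec.containers 0).resources.limits.cpu}}", deploy)) // prints: 200m
	fmt.Println(render("{{(index .spec.template.spec.containers 1).resources.limits.cpu}}", deploy)) // prints: 100m
}
```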
... skipping 73 lines ...
    status: "True"
    type: Progressing
  observedGeneration: 4
  replicas: 4
  unavailableReplicas: 4
  updatedReplicas: 1
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1286: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1287: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1288: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
E0324 02:24:47.001579   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "nginx-deployment-resources" deleted
+++ exit code: 0
E0324 02:24:47.092069   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_deployment_tests
Running command: run_deployment_tests

+++ Running case: test-cmd.run_deployment_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_deployment_tests
+++ [0324 02:24:47] Creating namespace namespace-1585016687-5594
E0324 02:24:47.191404   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1585016687-5594 created
Context "test" modified.
E0324 02:24:47.285401   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ [0324 02:24:47] Testing deployments
deployment.apps/test-nginx-extensions created
I0324 02:24:47.361845   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"test-nginx-extensions", UID:"8f7b2eac-f8c3-4ead-83a0-f8b19a630b41", APIVersion:"apps/v1", ResourceVersion:"1915", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-extensions-5559c76db7 to 1
I0324 02:24:47.369363   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"test-nginx-extensions-5559c76db7", UID:"c28a9a26-ecc8-4c2f-9d7a-3e4244bc38d6", APIVersion:"apps/v1", ResourceVersion:"1916", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-extensions-5559c76db7-b2dwt
apps.sh:185: Successful get deploy test-nginx-extensions {{(index .spec.template.spec.containers 0).name}}: nginx
Successful
... skipping 10 lines ...
Successful
message:10
has:10
Successful
message:apps/v1
has:apps/v1
E0324 02:24:48.002731   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:48.093182   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe rs:
Name:           test-nginx-apps-79b9bd9585
Namespace:      namespace-1585016687-5594
Selector:       app=test-nginx-apps,pod-template-hash=79b9bd9585
Labels:         app=test-nginx-apps
                pod-template-hash=79b9bd9585
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-nginx-apps
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-nginx-apps
           pod-template-hash=79b9bd9585
  Containers:
   nginx:
    Image:        k8s.gcr.io/nginx:test-cmd
... skipping 3 lines ...
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: test-nginx-apps-79b9bd9585-vz6x5
E0324 02:24:48.192438   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe pods:
Name:           test-nginx-apps-79b9bd9585-vz6x5
Namespace:      namespace-1585016687-5594
Priority:       0
Node:           <none>
Labels:         app=test-nginx-apps
... skipping 12 lines ...
    Mounts:       <none>
Volumes:          <none>
QoS Class:        BestEffort
Node-Selectors:   <none>
Tolerations:      <none>
Events:           <none>
E0324 02:24:48.286643   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "test-nginx-apps" deleted
apps.sh:214: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx-with-command created
I0324 02:24:48.466545   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx-with-command", UID:"529d5f75-bdd4-44b3-ae9b-c3e1d6df4b48", APIVersion:"apps/v1", ResourceVersion:"1944", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-with-command-757c6f58dd to 1
I0324 02:24:48.469082   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-with-command-757c6f58dd", UID:"afd04c25-403a-4f40-a367-947ca0eaf431", APIVersion:"apps/v1", ResourceVersion:"1945", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-with-command-757c6f58dd-8b96h
apps.sh:218: Successful get deploy nginx-with-command {{(index .spec.template.spec.containers 0).name}}: nginx
deployment.apps "nginx-with-command" deleted
apps.sh:224: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/deployment-with-unixuserid created
I0324 02:24:48.862682   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"deployment-with-unixuserid", UID:"c2dc32ca-5c8e-4863-b5d4-b6f034dd283f", APIVersion:"apps/v1", ResourceVersion:"1958", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set deployment-with-unixuserid-8fcdfc94f to 1
I0324 02:24:48.866322   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"deployment-with-unixuserid-8fcdfc94f", UID:"9fca5a80-acfd-4a2c-8977-5636c94d4535", APIVersion:"apps/v1", ResourceVersion:"1959", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-with-unixuserid-8fcdfc94f-cwrgt
apps.sh:228: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: deployment-with-unixuserid:
E0324 02:24:49.003993   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "deployment-with-unixuserid" deleted
E0324 02:24:49.094180   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:235: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:24:49.193748   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment created
I0324 02:24:49.262198   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment", UID:"eca4a0c9-bdc3-43e6-8279-54f0de12af9f", APIVersion:"apps/v1", ResourceVersion:"1972", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
I0324 02:24:49.266085   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-6986c7bc94", UID:"b6f6ab91-d471-4af6-9e1d-70f0d56c1b6f", APIVersion:"apps/v1", ResourceVersion:"1973", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-rz8hh
I0324 02:24:49.268367   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-6986c7bc94", UID:"b6f6ab91-d471-4af6-9e1d-70f0d56c1b6f", APIVersion:"apps/v1", ResourceVersion:"1973", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-64qgn
I0324 02:24:49.268523   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-6986c7bc94", UID:"b6f6ab91-d471-4af6-9e1d-70f0d56c1b6f", APIVersion:"apps/v1", ResourceVersion:"1973", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-89ntx
E0324 02:24:49.287473   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:239: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 3
deployment.apps "nginx-deployment" deleted
apps.sh:242: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:246: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:247: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx-deployment created
I0324 02:24:49.756741   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment", UID:"99b2b9b2-dd99-4242-b0d5-b377bde402cb", APIVersion:"apps/v1", ResourceVersion:"1995", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7f6fc565b9 to 1
I0324 02:24:49.758733   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-7f6fc565b9", UID:"42932488-88a1-4a46-8a5b-bef5cdf6d9f2", APIVersion:"apps/v1", ResourceVersion:"1996", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-7f6fc565b9-zkgg4
apps.sh:251: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1
deployment.apps "nginx-deployment" deleted
E0324 02:24:50.005218   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:256: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:24:50.095422   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:257: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1
E0324 02:24:50.194844   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps "nginx-deployment-7f6fc565b9" deleted
E0324 02:24:50.288607   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx-deployment created
I0324 02:24:50.513298   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment", UID:"5ace44ac-59a1-407d-b13c-505f2652d5db", APIVersion:"apps/v1", ResourceVersion:"2013", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
I0324 02:24:50.519483   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-6986c7bc94", UID:"79d09d22-f540-4d00-adcf-834394a56222", APIVersion:"apps/v1", ResourceVersion:"2014", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-vshn4
I0324 02:24:50.521752   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-6986c7bc94", UID:"79d09d22-f540-4d00-adcf-834394a56222", APIVersion:"apps/v1", ResourceVersion:"2014", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-9m7gv
I0324 02:24:50.527647   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-6986c7bc94", UID:"79d09d22-f540-4d00-adcf-834394a56222", APIVersion:"apps/v1", ResourceVersion:"2014", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-2s4mv
apps.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
horizontalpodautoscaler.autoscaling/nginx-deployment autoscaled
apps.sh:271: Successful get hpa nginx-deployment {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
(Bhorizontalpodautoscaler.autoscaling "nginx-deployment" deleted
deployment.apps "nginx-deployment" deleted
E0324 02:24:51.006313   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:279: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:24:51.096776   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:51.196011   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx created
I0324 02:24:51.210949   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx", UID:"b6f13d09-c729-4d1e-8c98-88fa51189ae8", APIVersion:"apps/v1", ResourceVersion:"2037", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
I0324 02:24:51.214082   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-f87d999f7", UID:"8cc7ff61-15dd-4198-b4b9-eda093742583", APIVersion:"apps/v1", ResourceVersion:"2038", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-jp99d
I0324 02:24:51.216233   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-f87d999f7", UID:"8cc7ff61-15dd-4198-b4b9-eda093742583", APIVersion:"apps/v1", ResourceVersion:"2038", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-29sdp
I0324 02:24:51.217210   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-f87d999f7", UID:"8cc7ff61-15dd-4198-b4b9-eda093742583", APIVersion:"apps/v1", ResourceVersion:"2038", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-pvbws
E0324 02:24:51.289897   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:283: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
apps.sh:284: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps/nginx skipped rollback (current template already matches revision 1)
apps.sh:287: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/nginx configured
I0324 02:24:51.716874   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx", UID:"b6f13d09-c729-4d1e-8c98-88fa51189ae8", APIVersion:"apps/v1", ResourceVersion:"2052", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-78487f9fd7 to 1
I0324 02:24:51.720109   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-78487f9fd7", UID:"7da1e639-1de8-428e-b49a-0f3adc7b92df", APIVersion:"apps/v1", ResourceVersion:"2053", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-78487f9fd7-xnqsl
apps.sh:290: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
    Image:	k8s.gcr.io/nginx:test-cmd
apps.sh:293: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
E0324 02:24:52.007606   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx rolled back
E0324 02:24:52.097598   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:52.197153   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:52.291223   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:53.008836   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:53.098775   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:297: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E0324 02:24:53.198394   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
error: unable to find specified revision 1000000 in history
E0324 02:24:53.292405   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps/nginx rolled back
E0324 02:24:54.010070   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:54.099945   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:54.199703   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:54.293533   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:304: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx paused
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
deployment.apps/nginx resumed
deployment.apps/nginx rolled back
E0324 02:24:55.014841   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:55.101124   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
    deployment.kubernetes.io/revision-history: 1,3
E0324 02:24:55.200964   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:55.294765   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
error: desired revision (3) is different from the running revision (5)
deployment.apps/nginx restarted
I0324 02:24:55.404666   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx", UID:"b6f13d09-c729-4d1e-8c98-88fa51189ae8", APIVersion:"apps/v1", ResourceVersion:"2082", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-78487f9fd7 to 0
I0324 02:24:55.409608   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx", UID:"b6f13d09-c729-4d1e-8c98-88fa51189ae8", APIVersion:"apps/v1", ResourceVersion:"2085", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6cf89f85d5 to 1
I0324 02:24:55.410122   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-78487f9fd7", UID:"7da1e639-1de8-428e-b49a-0f3adc7b92df", APIVersion:"apps/v1", ResourceVersion:"2086", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-78487f9fd7-xnqsl
I0324 02:24:55.411864   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-6cf89f85d5", UID:"ddd91177-904e-4892-b265-d8f03287ed7b", APIVersion:"apps/v1", ResourceVersion:"2089", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6cf89f85d5-kxpqk
E0324 02:24:56.016084   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:56.102361   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:56.202120   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:56.295921   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: "3"
... skipping 56 lines ...
I0324 02:24:56.723256   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx2-57b7865cd9", UID:"8c7358f0-4265-4336-9fca-3da9f15d26e6", APIVersion:"apps/v1", ResourceVersion:"2105", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-nw5hg
I0324 02:24:56.726005   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx2-57b7865cd9", UID:"8c7358f0-4265-4336-9fca-3da9f15d26e6", APIVersion:"apps/v1", ResourceVersion:"2105", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-zhhvf
I0324 02:24:56.726334   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx2-57b7865cd9", UID:"8c7358f0-4265-4336-9fca-3da9f15d26e6", APIVersion:"apps/v1", ResourceVersion:"2105", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-kdfjz
deployment.apps "nginx2" deleted
deployment.apps "nginx" deleted
apps.sh:334: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:24:57.017279   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:57.103615   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment created
I0324 02:24:57.119704   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment", UID:"b18cf810-4dd3-4f0c-97e4-733ed69af4c5", APIVersion:"apps/v1", ResourceVersion:"2138", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-598d4d68b4 to 3
I0324 02:24:57.122512   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-598d4d68b4", UID:"d4277a54-4840-4af5-a4d2-18de0916f984", APIVersion:"apps/v1", ResourceVersion:"2139", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-mc7ck
I0324 02:24:57.127138   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-598d4d68b4", UID:"d4277a54-4840-4af5-a4d2-18de0916f984", APIVersion:"apps/v1", ResourceVersion:"2139", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-bzl7d
I0324 02:24:57.127316   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-598d4d68b4", UID:"d4277a54-4840-4af5-a4d2-18de0916f984", APIVersion:"apps/v1", ResourceVersion:"2139", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-qpz8c
E0324 02:24:57.203097   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:337: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
E0324 02:24:57.297009   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:338: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:339: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
I0324 02:24:57.463903   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment", UID:"b18cf810-4dd3-4f0c-97e4-733ed69af4c5", APIVersion:"apps/v1", ResourceVersion:"2152", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-59df9b5f5b to 1
I0324 02:24:57.466794   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-59df9b5f5b", UID:"60cfa8ae-22c4-4f5a-a120-1c5d90e3dc88", APIVersion:"apps/v1", ResourceVersion:"2154", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-59df9b5f5b-blt7t
apps.sh:342: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:343: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
error: unable to find container named "redis"
deployment.apps/nginx-deployment image updated
apps.sh:348: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:349: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
E0324 02:24:58.018401   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment image updated
E0324 02:24:58.104794   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:352: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
E0324 02:24:58.204359   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:353: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
E0324 02:24:58.298242   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:356: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:357: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
I0324 02:24:58.568455   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment", UID:"b18cf810-4dd3-4f0c-97e4-733ed69af4c5", APIVersion:"apps/v1", ResourceVersion:"2171", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 2
I0324 02:24:58.573487   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment", UID:"b18cf810-4dd3-4f0c-97e4-733ed69af4c5", APIVersion:"apps/v1", ResourceVersion:"2173", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7d758dbc54 to 1
I0324 02:24:58.576549   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-598d4d68b4", UID:"d4277a54-4840-4af5-a4d2-18de0916f984", APIVersion:"apps/v1", ResourceVersion:"2175", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-598d4d68b4-mc7ck
I0324 02:24:58.577365   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-7d758dbc54", UID:"ab0aee3c-a7c5-4928-9ba3-90d028bf5732", APIVersion:"apps/v1", ResourceVersion:"2177", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-7d758dbc54-2clsp
apps.sh:360: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:361: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:364: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:365: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E0324 02:24:59.019642   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "nginx-deployment" deleted
E0324 02:24:59.105953   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:371: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:24:59.205712   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:24:59.299435   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment created
I0324 02:24:59.310873   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment", UID:"306d2eba-66e4-44bc-bfe3-38e6d6a22c0d", APIVersion:"apps/v1", ResourceVersion:"2204", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-598d4d68b4 to 3
I0324 02:24:59.314419   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-598d4d68b4", UID:"1e9b16a4-4e8f-4588-88a2-3abb259f619a", APIVersion:"apps/v1", ResourceVersion:"2205", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-fnxwg
I0324 02:24:59.317067   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-598d4d68b4", UID:"1e9b16a4-4e8f-4588-88a2-3abb259f619a", APIVersion:"apps/v1", ResourceVersion:"2205", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-24fxh
I0324 02:24:59.317763   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-598d4d68b4", UID:"1e9b16a4-4e8f-4588-88a2-3abb259f619a", APIVersion:"apps/v1", ResourceVersion:"2205", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-mjb6q
I0324 02:24:59.446790   53292 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1585016676-25803
... skipping 2 lines ...
apps.sh:376: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
apps.sh:378: Successful get configmaps/test-set-env-config {{.metadata.name}}: test-set-env-config
apps.sh:379: Successful get secret {{range.items}}{{.metadata.name}}:{{end}}: test-set-env-secret:
deployment.apps/nginx-deployment env updated
I0324 02:24:59.971121   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment", UID:"306d2eba-66e4-44bc-bfe3-38e6d6a22c0d", APIVersion:"apps/v1", ResourceVersion:"2221", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6b9f7756b4 to 1
I0324 02:24:59.974688   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-6b9f7756b4", UID:"c8fcbfed-2141-4c72-86c8-6a0f9c79235f", APIVersion:"apps/v1", ResourceVersion:"2222", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6b9f7756b4-7sgm2
E0324 02:25:00.020848   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:383: Successful get deploy nginx-deployment {{ (index (index .spec.template.spec.containers 0).env 0).name}}: KEY_2
E0324 02:25:00.107107   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:385: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 1
E0324 02:25:00.206934   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment env updated
I0324 02:25:00.244743   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment", UID:"306d2eba-66e4-44bc-bfe3-38e6d6a22c0d", APIVersion:"apps/v1", ResourceVersion:"2231", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 2
I0324 02:25:00.249366   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-598d4d68b4", UID:"1e9b16a4-4e8f-4588-88a2-3abb259f619a", APIVersion:"apps/v1", ResourceVersion:"2235", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-598d4d68b4-fnxwg
I0324 02:25:00.250575   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment", UID:"306d2eba-66e4-44bc-bfe3-38e6d6a22c0d", APIVersion:"apps/v1", ResourceVersion:"2233", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-754bf964c8 to 1
I0324 02:25:00.254629   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-754bf964c8", UID:"9092da0e-b1d8-41e3-8257-27665efea1a3", APIVersion:"apps/v1", ResourceVersion:"2239", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-754bf964c8-zdpzs
E0324 02:25:00.300507   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:389: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 2
deployment.apps/nginx-deployment env updated
I0324 02:25:00.428070   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment", UID:"306d2eba-66e4-44bc-bfe3-38e6d6a22c0d", APIVersion:"apps/v1", ResourceVersion:"2251", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 1
I0324 02:25:00.435193   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-598d4d68b4", UID:"1e9b16a4-4e8f-4588-88a2-3abb259f619a", APIVersion:"apps/v1", ResourceVersion:"2255", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-598d4d68b4-24fxh
I0324 02:25:00.436377   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment", UID:"306d2eba-66e4-44bc-bfe3-38e6d6a22c0d", APIVersion:"apps/v1", ResourceVersion:"2254", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-c6d5c5c7b to 1
I0324 02:25:00.441072   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-c6d5c5c7b", UID:"c68aaacb-acc8-4ec9-90e9-bd79bca63eb4", APIVersion:"apps/v1", ResourceVersion:"2259", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-c6d5c5c7b-d6nfp
... skipping 6 lines ...
I0324 02:25:00.672073   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment", UID:"306d2eba-66e4-44bc-bfe3-38e6d6a22c0d", APIVersion:"apps/v1", ResourceVersion:"2291", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6b9f7756b4 to 0
deployment.apps/nginx-deployment env updated
deployment.apps/nginx-deployment env updated
I0324 02:25:00.845261   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-6b9f7756b4", UID:"c8fcbfed-2141-4c72-86c8-6a0f9c79235f", APIVersion:"apps/v1", ResourceVersion:"2295", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6b9f7756b4-7sgm2
I0324 02:25:00.871540   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment", UID:"306d2eba-66e4-44bc-bfe3-38e6d6a22c0d", APIVersion:"apps/v1", ResourceVersion:"2299", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-868b664cb5 to 1
deployment.apps "nginx-deployment" deleted
E0324 02:25:00.942968   53292 replica_set.go:450] Sync "namespace-1585016687-5594/nginx-deployment-5958f7687" failed with replicasets.apps "nginx-deployment-5958f7687" not found
configmap "test-set-env-config" deleted
E0324 02:25:01.021979   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret "test-set-env-secret" deleted
+++ exit code: 0
I0324 02:25:01.093509   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016687-5594", Name:"nginx-deployment-868b664cb5", UID:"c087fd7c-072f-4479-830e-21f9767c061a", APIVersion:"apps/v1", ResourceVersion:"2302", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-868b664cb5-9xhw4
E0324 02:25:01.107899   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_rs_tests
Running command: run_rs_tests

+++ Running case: test-cmd.run_rs_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
E0324 02:25:01.142313   53292 replica_set.go:450] Sync "namespace-1585016687-5594/nginx-deployment-598d4d68b4" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-598d4d68b4": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1585016687-5594/nginx-deployment-598d4d68b4, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 1e9b16a4-4e8f-4588-88a2-3abb259f619a, UID in object meta: 
+++ command: run_rs_tests
+++ [0324 02:25:01] Creating namespace namespace-1585016701-30531
E0324 02:25:01.191969   53292 replica_set.go:450] Sync "namespace-1585016687-5594/nginx-deployment-6b9f7756b4" failed with replicasets.apps "nginx-deployment-6b9f7756b4" not found
E0324 02:25:01.207996   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1585016701-30531 created
Context "test" modified.
+++ [0324 02:25:01] Testing kubectl(v1:replicasets)
E0324 02:25:01.301571   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:25:01.341953   53292 replica_set.go:450] Sync "namespace-1585016687-5594/nginx-deployment-868b664cb5" failed with replicasets.apps "nginx-deployment-868b664cb5" not found
apps.sh:511: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I0324 02:25:01.526399   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"frontend", UID:"a04df146-91a0-44c5-9f48-df941c5f9d7d", APIVersion:"apps/v1", ResourceVersion:"2328", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qrkhf
I0324 02:25:01.529053   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"frontend", UID:"a04df146-91a0-44c5-9f48-df941c5f9d7d", APIVersion:"apps/v1", ResourceVersion:"2328", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-cjh8j
+++ [0324 02:25:01] Deleting rs
I0324 02:25:01.542575   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"frontend", UID:"a04df146-91a0-44c5-9f48-df941c5f9d7d", APIVersion:"apps/v1", ResourceVersion:"2328", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-p4zt4
replicaset.apps "frontend" deleted
apps.sh:517: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:25:01.741820   53292 replica_set.go:450] Sync "namespace-1585016701-30531/frontend" failed with replicasets.apps "frontend" not found
apps.sh:521: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend-no-cascade created
I0324 02:25:01.933636   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"frontend-no-cascade", UID:"19dac794-890c-49f1-a85f-e3865cf7e9d8", APIVersion:"apps/v1", ResourceVersion:"2342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-ccrkp
I0324 02:25:01.936219   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"frontend-no-cascade", UID:"19dac794-890c-49f1-a85f-e3865cf7e9d8", APIVersion:"apps/v1", ResourceVersion:"2342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-hwmqd
I0324 02:25:01.942299   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"frontend-no-cascade", UID:"19dac794-890c-49f1-a85f-e3865cf7e9d8", APIVersion:"apps/v1", ResourceVersion:"2342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-lcdsl
E0324 02:25:02.023116   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:527: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
+++ [0324 02:25:02] Deleting rs
replicaset.apps "frontend-no-cascade" deleted
E0324 02:25:02.108803   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:25:02.191454   53292 replica_set.go:450] Sync "namespace-1585016701-30531/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
apps.sh:531: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:25:02.208905   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:533: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
E0324 02:25:02.302627   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod "frontend-no-cascade-ccrkp" deleted
pod "frontend-no-cascade-hwmqd" deleted
pod "frontend-no-cascade-lcdsl" deleted
apps.sh:536: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:540: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
... skipping 6 lines ...
Namespace:    namespace-1585016701-30531
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1585016701-30531
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 10 lines ...
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-q2727
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-prpbw
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-r9dxd
E0324 02:25:03.024159   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:550: Successful describe
Name:         frontend
Namespace:    namespace-1585016701-30531
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 4 lines ...
      memory:  100Mi
    Environment:
      GET_HOSTS_FROM:  dns
    Mounts:            <none>
  Volumes:             <none>
E0324 02:25:03.109917   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:552: Successful describe
E0324 02:25:03.209925   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Name:         frontend
Namespace:    namespace-1585016701-30531
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 10 lines ...
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-q2727
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-prpbw
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-r9dxd
E0324 02:25:03.303629   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe rs:
Name:         frontend
Namespace:    namespace-1585016701-30531
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1585016701-30531
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1585016701-30531
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
Namespace:    namespace-1585016701-30531
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 99 lines ...
Tolerations:           <none>
Events:                <none>
apps.sh:566: Successful get rs frontend {{.spec.replicas}}: 3
replicaset.apps/frontend scaled
I0324 02:25:03.905733   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"frontend", UID:"bbe6593c-70db-44a1-a919-2f3e7f03a7fc", APIVersion:"apps/v1", ResourceVersion:"2371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-q2727
apps.sh:570: Successful get rs frontend {{.spec.replicas}}: 2
E0324 02:25:04.025518   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:25:04.111179   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/scale-1 created
I0324 02:25:04.141777   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016701-30531", Name:"scale-1", UID:"20cbc9a7-0d89-4dd2-aaf9-79b5938f255d", APIVersion:"apps/v1", ResourceVersion:"2377", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-1-5c5565bcd9 to 1
I0324 02:25:04.143834   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"scale-1-5c5565bcd9", UID:"62ec808d-4c18-431d-8462-5e4a171439f4", APIVersion:"apps/v1", ResourceVersion:"2378", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-1-5c5565bcd9-6dm5q
E0324 02:25:04.211127   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/scale-2 created
I0324 02:25:04.300989   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016701-30531", Name:"scale-2", UID:"b55e6f45-fd40-4b9b-a5e2-2f56531d5fe6", APIVersion:"apps/v1", ResourceVersion:"2387", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-2-5c5565bcd9 to 1
I0324 02:25:04.303398   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"scale-2-5c5565bcd9", UID:"9d87ede8-6e19-4e07-904c-4ab242c21328", APIVersion:"apps/v1", ResourceVersion:"2388", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-2-5c5565bcd9-v5c42
E0324 02:25:04.305235   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/scale-3 created
I0324 02:25:04.474655   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016701-30531", Name:"scale-3", UID:"ba78ffd6-bc24-45c1-9600-30b156e8e937", APIVersion:"apps/v1", ResourceVersion:"2397", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-3-5c5565bcd9 to 1
I0324 02:25:04.478612   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"scale-3-5c5565bcd9", UID:"93f0ab71-c48c-4602-9f75-d2e628501f51", APIVersion:"apps/v1", ResourceVersion:"2398", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-3-5c5565bcd9-7sc26
apps.sh:576: Successful get deploy scale-1 {{.spec.replicas}}: 1
apps.sh:577: Successful get deploy scale-2 {{.spec.replicas}}: 1
apps.sh:578: Successful get deploy scale-3 {{.spec.replicas}}: 1
... skipping 2 lines ...
deployment.apps/scale-2 scaled
I0324 02:25:04.820212   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"scale-1-5c5565bcd9", UID:"62ec808d-4c18-431d-8462-5e4a171439f4", APIVersion:"apps/v1", ResourceVersion:"2409", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-1-5c5565bcd9-frn7j
I0324 02:25:04.821152   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016701-30531", Name:"scale-2", UID:"b55e6f45-fd40-4b9b-a5e2-2f56531d5fe6", APIVersion:"apps/v1", ResourceVersion:"2410", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-2-5c5565bcd9 to 2
I0324 02:25:04.825260   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"scale-2-5c5565bcd9", UID:"9d87ede8-6e19-4e07-904c-4ab242c21328", APIVersion:"apps/v1", ResourceVersion:"2414", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-2-5c5565bcd9-m6bcd
apps.sh:581: Successful get deploy scale-1 {{.spec.replicas}}: 2
apps.sh:582: Successful get deploy scale-2 {{.spec.replicas}}: 2
E0324 02:25:05.026672   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:583: Successful get deploy scale-3 {{.spec.replicas}}: 1
E0324 02:25:05.112448   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/scale-1 scaled
I0324 02:25:05.149691   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016701-30531", Name:"scale-1", UID:"20cbc9a7-0d89-4dd2-aaf9-79b5938f255d", APIVersion:"apps/v1", ResourceVersion:"2428", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-1-5c5565bcd9 to 3
deployment.apps/scale-2 scaled
I0324 02:25:05.152779   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"scale-1-5c5565bcd9", UID:"62ec808d-4c18-431d-8462-5e4a171439f4", APIVersion:"apps/v1", ResourceVersion:"2429", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-1-5c5565bcd9-dlxqg
I0324 02:25:05.154629   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016701-30531", Name:"scale-2", UID:"b55e6f45-fd40-4b9b-a5e2-2f56531d5fe6", APIVersion:"apps/v1", ResourceVersion:"2430", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-2-5c5565bcd9 to 3
deployment.apps/scale-3 scaled
I0324 02:25:05.159485   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"scale-2-5c5565bcd9", UID:"9d87ede8-6e19-4e07-904c-4ab242c21328", APIVersion:"apps/v1", ResourceVersion:"2434", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-2-5c5565bcd9-dz7p8
I0324 02:25:05.160104   53292 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585016701-30531", Name:"scale-3", UID:"ba78ffd6-bc24-45c1-9600-30b156e8e937", APIVersion:"apps/v1", ResourceVersion:"2435", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-3-5c5565bcd9 to 3
I0324 02:25:05.164016   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"scale-3-5c5565bcd9", UID:"93f0ab71-c48c-4602-9f75-d2e628501f51", APIVersion:"apps/v1", ResourceVersion:"2441", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-3-5c5565bcd9-8p5zf
I0324 02:25:05.167711   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"scale-3-5c5565bcd9", UID:"93f0ab71-c48c-4602-9f75-d2e628501f51", APIVersion:"apps/v1", ResourceVersion:"2441", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-3-5c5565bcd9-t76hg
E0324 02:25:05.212156   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:586: Successful get deploy scale-1 {{.spec.replicas}}: 3
E0324 02:25:05.306310   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:587: Successful get deploy scale-2 {{.spec.replicas}}: 3
apps.sh:588: Successful get deploy scale-3 {{.spec.replicas}}: 3
replicaset.apps "frontend" deleted
deployment.apps "scale-1" deleted
deployment.apps "scale-2" deleted
deployment.apps "scale-3" deleted
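The scale-1/scale-2/scale-3 sequence above (each stepped from 1 to 2 to 3 replicas, then deleted) is driven by kubectl scale. The exact flags used by apps.sh are not shown in this log, so this is only a sketch; it assumes a running cluster with those deployments present:

```shell
# Scale several deployments in one invocation (names taken from the log above).
kubectl scale deployment scale-1 scale-2 --replicas=2
kubectl scale deployment scale-1 scale-2 scale-3 --replicas=3

# Verify, mirroring the apps.sh:586-588 assertions, which read .spec.replicas
# via a go-template:
kubectl get deploy scale-3 -o go-template='{{.spec.replicas}}'

# Clean up, producing the "deployment.apps ... deleted" lines seen above.
kubectl delete deployment scale-1 scale-2 scale-3
```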
E0324 02:25:05.591487   53292 replica_set.go:450] Sync "namespace-1585016701-30531/scale-3-5c5565bcd9" failed with replicasets.apps "scale-3-5c5565bcd9" not found
E0324 02:25:05.641855   53292 replica_set.go:450] Sync "namespace-1585016701-30531/scale-2-5c5565bcd9" failed with replicasets.apps "scale-2-5c5565bcd9" not found
I0324 02:25:05.708722   53292 horizontal.go:341] Horizontal Pod Autoscaler nginx-deployment has been deleted in namespace-1585016687-5594
replicaset.apps/frontend created
I0324 02:25:05.742196   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"frontend", UID:"5baca87c-0007-4369-b041-24e96f81b084", APIVersion:"apps/v1", ResourceVersion:"2488", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gx2zz
I0324 02:25:05.792937   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"frontend", UID:"5baca87c-0007-4369-b041-24e96f81b084", APIVersion:"apps/v1", ResourceVersion:"2488", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hljbq
apps.sh:596: Successful get rs frontend {{.spec.replicas}}: 3
I0324 02:25:05.843107   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"frontend", UID:"5baca87c-0007-4369-b041-24e96f81b084", APIVersion:"apps/v1", ResourceVersion:"2488", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-255jt
service/frontend exposed
apps.sh:600: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
E0324 02:25:06.027945   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/frontend-2 exposed
E0324 02:25:06.113593   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:604: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: default 80
E0324 02:25:06.213438   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "frontend" deleted
service "frontend-2" deleted
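The two "service/... exposed" lines above most likely come from kubectl expose against the frontend ReplicaSet; the flags are elided from this log, so the following is a sketch reconstructed from the port assertions (port 80, first service's port unnamed, hence the `<no value>` in apps.sh:600):

```shell
# Expose the frontend ReplicaSet on port 80; kubectl expose derives the
# service spec from the ReplicaSet's selector.
kubectl expose rs frontend --port=80 --name=frontend

# Read the first port's name and number with a go-template, as apps.sh does.
kubectl get service frontend \
  -o go-template='{{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}'

kubectl delete service frontend
```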
E0324 02:25:06.307553   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:610: Successful get rs frontend {{.metadata.generation}}: 1
replicaset.apps/frontend image updated
apps.sh:612: Successful get rs frontend {{.metadata.generation}}: 2
replicaset.apps/frontend env updated
apps.sh:614: Successful get rs frontend {{.metadata.generation}}: 3
replicaset.apps/frontend resource requirements updated
apps.sh:616: Successful get rs frontend {{.metadata.generation}}: 4
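Each "updated" line above is a spec mutation that bumps .metadata.generation by one, giving the 1 → 2 → 3 → 4 sequence the assertions check. These mutations are typically issued with kubectl set; the concrete image tag, env var, and resource values here are assumptions for illustration, not taken from the log:

```shell
# Each mutation below increments the ReplicaSet's .metadata.generation.
kubectl set image rs/frontend php-redis=gcr.io/google_samples/gb-frontend:v4  # generation: 2
kubectl set env rs/frontend EXAMPLE_VAR=example                               # generation: 3
kubectl set resources rs/frontend --limits=cpu=200m,memory=512Mi              # generation: 4

# Confirm, mirroring the go-template assertion style used by apps.sh.
kubectl get rs frontend -o go-template='{{.metadata.generation}}'
```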
apps.sh:620: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:
E0324 02:25:07.028974   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps "frontend" deleted
E0324 02:25:07.114564   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:624: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:628: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:25:07.214578   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0324 02:25:07.308685   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend created
I0324 02:25:07.353934   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"frontend", UID:"cf8059e3-2125-4565-ae05-72ff0866385c", APIVersion:"apps/v1", ResourceVersion:"2522", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7sll6
I0324 02:25:07.356281   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"frontend", UID:"cf8059e3-2125-4565-ae05-72ff0866385c", APIVersion:"apps/v1", ResourceVersion:"2522", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8rqwq
I0324 02:25:07.357264   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"frontend", UID:"cf8059e3-2125-4565-ae05-72ff0866385c", APIVersion:"apps/v1", ResourceVersion:"2522", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7t44x
replicaset.apps/redis-slave created
I0324 02:25:07.511148   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"redis-slave", UID:"981b26d0-4f96-4d77-a216-50145e77b521", APIVersion:"apps/v1", ResourceVersion:"2532", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-tfqxm
I0324 02:25:07.513399   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"redis-slave", UID:"981b26d0-4f96-4d77-a216-50145e77b521", APIVersion:"apps/v1", ResourceVersion:"2532", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-v4dq9
apps.sh:633: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
apps.sh:637: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
(Breplicaset.apps "frontend" deleted
replicaset.apps "redis-slave" deleted
apps.sh:641: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:646: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:25:08.030145   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend created
I0324 02:25:08.097455   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"frontend", UID:"ae5b05be-9935-446f-8be1-365f3da96c14", APIVersion:"apps/v1", ResourceVersion:"2551", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-bfjks
I0324 02:25:08.100207   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"frontend", UID:"ae5b05be-9935-446f-8be1-365f3da96c14", APIVersion:"apps/v1", ResourceVersion:"2551", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9dbkw
I0324 02:25:08.100254   53292 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585016701-30531", Name:"frontend", UID:"ae5b05be-9935-446f-8be1-365f3da96c14", APIVersion:"apps/v1", ResourceVersion:"2551", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5hqx4
E0324 02:25:08.115458   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:649: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:
E0324 02:25:08.215696   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
horizontalpodautoscaler.autoscaling/frontend autoscaled
E0324 02:25:08.309819   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:652: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:656: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
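The two HPA assertions above (min/max/targetCPU of 1/2/70, then 2/3/80) map onto kubectl autoscale, and the error that follows shows --max is a required flag. A sketch of the likely invocations, assuming the frontend ReplicaSet exists:

```shell
# First autoscaler: min 1, max 2, target 70% CPU.
kubectl autoscale rs frontend --min=1 --max=2 --cpu-percent=70
kubectl get hpa frontend \
  -o go-template='{{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}'
kubectl delete hpa frontend

# Second autoscaler: min 2, max 3, target 80% CPU.
kubectl autoscale rs frontend --min=2 --max=3 --cpu-percent=80

# Omitting --max fails with: Error: required flag(s) "max" not set
# (the error recorded below in this log).
```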
Error: required flag(s) "max" not set


Examples:
  # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
  kubectl autoscale deployment foo --min=2 --max=10
  
... skipping 30 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_stateful_set_tests
+++ [0324 02:25:08] Creating namespace namespace-1585016708-19578
namespace/namespace-1585016708-19578 created
Context "test" modified.
+++ [0324 02:25:09] Testing kubectl(v1:statefulsets)
E0324 02:25:09.031602   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:470: Successful get statefulset {{range.items}}{{.metadata.name}}:{{end}}: 
E0324 02:25:09.116492   53292 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource