Result: FAILURE
Tests: 1 failed / 2862 succeeded
Started: 2019-09-17 06:19
Elapsed: 28m39s
Builder: gke-prow-ssd-pool-1a225945-mhkp
Refs: master:be68d68b, 82325:19e5c856, 82574:b3ecf288
resultstore: https://source.cloud.google.com/results/invocations/8d63f59a-0bdd-41c0-814a-b9a8ff61e353/targets/test
infra-commit: edf7b7b9e
pod: 0426ea9a-d913-11e9-8d3e-e6dd98504fa2
repo: k8s.io/kubernetes
repo-commit: 355c5195c5f0f1d9875c1e399885df3341a8f79f
repos: k8s.io/kubernetes: master:be68d68b2b4b72f60a2bb4734051b2102068cd7f, 82325:19e5c8565d444cbb81d554a69960d7144996b05a, 82574:b3ecf288a5e68cf81b00d33b3cff4d9a54400b9c

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestNodePIDPressure 33s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$
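The command above is the exact invocation from the job. To try it locally, a minimal sketch is shown below; it assumes a k8s.io/kubernetes checkout at $GOPATH/src/k8s.io/kubernetes and an etcd binary on PATH, since the integration harness in the log below dials etcd at http://127.0.0.1:2379. The etcd flags and checkout path are assumptions, not part of this report.

  # Minimal local-reproduction sketch (assumed layout: kubernetes checkout under $GOPATH,
  # etcd installed locally; the test talks to etcd at http://127.0.0.1:2379 as the log shows).
  cd "$GOPATH/src/k8s.io/kubernetes"

  # Start a throwaway etcd for the integration test to use.
  etcd --listen-client-urls http://127.0.0.1:2379 \
       --advertise-client-urls http://127.0.0.1:2379 &
  ETCD_PID=$!

  # Run only the failing test, exactly as the job did.
  go test -v k8s.io/kubernetes/test/integration/scheduler -run 'TestNodePIDPressure$'

  # Clean up the local etcd.
  kill "$ETCD_PID"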
=== RUN   TestNodePIDPressure
W0917 06:43:01.951863  108684 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0917 06:43:01.951882  108684 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0917 06:43:01.951895  108684 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0917 06:43:01.951904  108684 master.go:259] Using reconciler: 
I0917 06:43:01.954625  108684 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:01.955127  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:01.955353  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:01.956462  108684 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0917 06:43:01.956500  108684 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:01.956900  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:01.956924  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:01.957009  108684 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0917 06:43:01.957920  108684 store.go:1342] Monitoring events count at <storage-prefix>//events
I0917 06:43:01.957955  108684 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:01.957979  108684 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0917 06:43:01.958106  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:01.958124  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:01.958332  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:01.959540  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:01.959581  108684 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0917 06:43:01.959618  108684 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:01.959669  108684 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0917 06:43:01.960416  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:01.960442  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:01.961433  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:01.961986  108684 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0917 06:43:01.962017  108684 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0917 06:43:01.962205  108684 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:01.962390  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:01.962417  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:01.963155  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:01.963341  108684 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0917 06:43:01.963392  108684 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0917 06:43:01.963518  108684 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:01.963648  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:01.963675  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:01.965088  108684 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0917 06:43:01.965129  108684 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0917 06:43:01.965268  108684 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:01.965390  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:01.965996  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:01.966022  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:01.968558  108684 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0917 06:43:01.968638  108684 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0917 06:43:01.968801  108684 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:01.968943  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:01.968963  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:01.969824  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:01.970885  108684 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0917 06:43:01.970933  108684 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0917 06:43:01.971108  108684 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:01.971255  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:01.971281  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:01.971751  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:01.972751  108684 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0917 06:43:01.972943  108684 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:01.973170  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:01.973195  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:01.973195  108684 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0917 06:43:01.974212  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:01.975005  108684 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0917 06:43:01.975177  108684 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0917 06:43:01.976378  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:01.977276  108684 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:01.977437  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:01.977531  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:01.978502  108684 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0917 06:43:01.978692  108684 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:01.978858  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:01.978878  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:01.978978  108684 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0917 06:43:01.980573  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:01.981439  108684 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0917 06:43:01.981576  108684 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0917 06:43:01.982924  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:01.984302  108684 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:01.985279  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:01.985311  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:01.986664  108684 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0917 06:43:01.986883  108684 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:01.987041  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:01.987062  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:01.987160  108684 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0917 06:43:01.989325  108684 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0917 06:43:01.989380  108684 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:01.989460  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:01.989532  108684 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0917 06:43:01.989550  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:01.989569  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:01.989806  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:01.991920  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:01.993124  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:01.993154  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:01.994903  108684 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:01.995174  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:01.995282  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:01.998277  108684 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0917 06:43:01.998348  108684 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0917 06:43:01.998358  108684 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0917 06:43:01.999087  108684 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:01.999437  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:01.999991  108684 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.001222  108684 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.003056  108684 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.003962  108684 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.005022  108684 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.006133  108684 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.006290  108684 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.006575  108684 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.007123  108684 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.007742  108684 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.008088  108684 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.009677  108684 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.010062  108684 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.010643  108684 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.010928  108684 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.012203  108684 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.012429  108684 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.012566  108684 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.012704  108684 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.012936  108684 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.013096  108684 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.013290  108684 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.014786  108684 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.015124  108684 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.016172  108684 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.017800  108684 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.018257  108684 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.018805  108684 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.019823  108684 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.020743  108684 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.021888  108684 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.023256  108684 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.025080  108684 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.026246  108684 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.026973  108684 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.027213  108684 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0917 06:43:02.027320  108684 master.go:461] Enabling API group "authentication.k8s.io".
I0917 06:43:02.027403  108684 master.go:461] Enabling API group "authorization.k8s.io".
I0917 06:43:02.028374  108684 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.028706  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.028872  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.030095  108684 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0917 06:43:02.030191  108684 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0917 06:43:02.030293  108684 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.030480  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.030503  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.031390  108684 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0917 06:43:02.031603  108684 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.031674  108684 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0917 06:43:02.031829  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.031859  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.032472  108684 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0917 06:43:02.032494  108684 master.go:461] Enabling API group "autoscaling".
I0917 06:43:02.032607  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.032678  108684 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.032846  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.032867  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.032959  108684 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0917 06:43:02.033051  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.034579  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.034681  108684 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0917 06:43:02.034752  108684 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0917 06:43:02.034899  108684 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.035024  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.035042  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.036167  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.037214  108684 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0917 06:43:02.037240  108684 master.go:461] Enabling API group "batch".
I0917 06:43:02.037416  108684 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.037530  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.037549  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.037636  108684 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0917 06:43:02.039216  108684 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0917 06:43:02.039250  108684 master.go:461] Enabling API group "certificates.k8s.io".
I0917 06:43:02.039303  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.039429  108684 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.039558  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.039579  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.039610  108684 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0917 06:43:02.041212  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.041226  108684 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0917 06:43:02.041341  108684 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0917 06:43:02.041440  108684 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.042021  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.042210  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.043122  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.044719  108684 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0917 06:43:02.044747  108684 master.go:461] Enabling API group "coordination.k8s.io".
I0917 06:43:02.044793  108684 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0917 06:43:02.046481  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.046897  108684 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0917 06:43:02.047139  108684 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.047419  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.047450  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.048476  108684 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0917 06:43:02.048596  108684 master.go:461] Enabling API group "extensions".
I0917 06:43:02.048542  108684 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0917 06:43:02.049911  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.051121  108684 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.051428  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.051527  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.052539  108684 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0917 06:43:02.052627  108684 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0917 06:43:02.052738  108684 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.053403  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.053435  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.053942  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.054287  108684 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0917 06:43:02.054312  108684 master.go:461] Enabling API group "networking.k8s.io".
I0917 06:43:02.054348  108684 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.054409  108684 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0917 06:43:02.054491  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.054511  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.055498  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.055922  108684 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0917 06:43:02.055940  108684 master.go:461] Enabling API group "node.k8s.io".
I0917 06:43:02.056373  108684 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.056528  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.056547  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.056697  108684 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0917 06:43:02.058018  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.058407  108684 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0917 06:43:02.058608  108684 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.058779  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.058800  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.058915  108684 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0917 06:43:02.060500  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.061213  108684 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0917 06:43:02.061459  108684 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0917 06:43:02.061564  108684 master.go:461] Enabling API group "policy".
I0917 06:43:02.061789  108684 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.061954  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.061978  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.062710  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.063888  108684 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0917 06:43:02.064083  108684 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.064229  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.064248  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.064360  108684 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0917 06:43:02.065329  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.066196  108684 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0917 06:43:02.066324  108684 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.066528  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.066982  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.066368  108684 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0917 06:43:02.068067  108684 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0917 06:43:02.068148  108684 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0917 06:43:02.068342  108684 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.068388  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.069591  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.069833  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.069861  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.071734  108684 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0917 06:43:02.071802  108684 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.071901  108684 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0917 06:43:02.073192  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.074087  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.074120  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.076749  108684 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0917 06:43:02.076802  108684 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0917 06:43:02.077487  108684 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.078171  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.078498  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.078587  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.079653  108684 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0917 06:43:02.079694  108684 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.080110  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.080115  108684 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0917 06:43:02.080307  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.081018  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.081528  108684 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0917 06:43:02.081693  108684 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0917 06:43:02.081960  108684 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.082559  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.082724  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.083462  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.085216  108684 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0917 06:43:02.085365  108684 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0917 06:43:02.085749  108684 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0917 06:43:02.087273  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.090921  108684 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.091250  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.091340  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.092597  108684 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0917 06:43:02.092649  108684 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0917 06:43:02.092888  108684 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.093174  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.093282  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.093723  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.094558  108684 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0917 06:43:02.094591  108684 master.go:461] Enabling API group "scheduling.k8s.io".
I0917 06:43:02.094732  108684 master.go:450] Skipping disabled API group "settings.k8s.io".
I0917 06:43:02.094936  108684 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.095120  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.095156  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.095272  108684 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0917 06:43:02.096876  108684 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0917 06:43:02.097015  108684 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0917 06:43:02.097516  108684 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.097750  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.097924  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.097546  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.098756  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.100242  108684 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0917 06:43:02.100306  108684 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.100425  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.100442  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.100543  108684 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0917 06:43:02.102283  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.103441  108684 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0917 06:43:02.103486  108684 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.103606  108684 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0917 06:43:02.103639  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.103817  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.105058  108684 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0917 06:43:02.105168  108684 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0917 06:43:02.105288  108684 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.105694  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.105723  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.106624  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.106901  108684 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0917 06:43:02.107221  108684 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.107408  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.107434  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.109279  108684 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0917 06:43:02.109645  108684 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0917 06:43:02.109723  108684 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0917 06:43:02.110095  108684 master.go:461] Enabling API group "storage.k8s.io".
I0917 06:43:02.110197  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.110491  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.110331  108684 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.110889  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.110910  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.111143  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.111550  108684 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0917 06:43:02.111796  108684 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.111930  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.111953  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.112074  108684 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0917 06:43:02.112992  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.114026  108684 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0917 06:43:02.114225  108684 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.114364  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.114391  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.114476  108684 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0917 06:43:02.115423  108684 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0917 06:43:02.115536  108684 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0917 06:43:02.115645  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.116099  108684 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.116545  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.116576  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.117157  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.117650  108684 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0917 06:43:02.117920  108684 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.117981  108684 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0917 06:43:02.118083  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.118102  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.119319  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.120255  108684 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0917 06:43:02.120281  108684 master.go:461] Enabling API group "apps".
I0917 06:43:02.120327  108684 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.120361  108684 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0917 06:43:02.120481  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.120499  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.121445  108684 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0917 06:43:02.121514  108684 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.121677  108684 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0917 06:43:02.121702  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.121721  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.121821  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.123469  108684 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0917 06:43:02.123505  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.123508  108684 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.123698  108684 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0917 06:43:02.123923  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.123942  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.124819  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.125124  108684 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0917 06:43:02.125149  108684 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.125272  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.125291  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.126065  108684 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0917 06:43:02.126089  108684 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0917 06:43:02.126103  108684 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0917 06:43:02.126118  108684 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.126437  108684 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0917 06:43:02.126470  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.126489  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:02.127724  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.128309  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.128439  108684 store.go:1342] Monitoring events count at <storage-prefix>//events
I0917 06:43:02.128464  108684 master.go:461] Enabling API group "events.k8s.io".
I0917 06:43:02.128624  108684 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0917 06:43:02.128718  108684 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.129020  108684 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.129338  108684 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.129484  108684 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.129685  108684 watch_cache.go:405] Replace watchCache (rev: 30144) 
I0917 06:43:02.129691  108684 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.129894  108684 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.130151  108684 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.130338  108684 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.130489  108684 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.130638  108684 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.131901  108684 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.132129  108684 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.132911  108684 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.133104  108684 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.134064  108684 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.134267  108684 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.134893  108684 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.135076  108684 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.135899  108684 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.136159  108684 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0917 06:43:02.136220  108684 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0917 06:43:02.137014  108684 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.137409  108684 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.137721  108684 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.138920  108684 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.139924  108684 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.140800  108684 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.141019  108684 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.143149  108684 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.144082  108684 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.144433  108684 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.145578  108684 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0917 06:43:02.145717  108684 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0917 06:43:02.146581  108684 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.147097  108684 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.148212  108684 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.149099  108684 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.149676  108684 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.150966  108684 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.152123  108684 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.152923  108684 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.153793  108684 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.154384  108684 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.155006  108684 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0917 06:43:02.155142  108684 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0917 06:43:02.155972  108684 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.156683  108684 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0917 06:43:02.156867  108684 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0917 06:43:02.157756  108684 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.159198  108684 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.159732  108684 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.160463  108684 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.161301  108684 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.162227  108684 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.163599  108684 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0917 06:43:02.163787  108684 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0917 06:43:02.165148  108684 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.167746  108684 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.168177  108684 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.169288  108684 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.169591  108684 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.169960  108684 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.170941  108684 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.171404  108684 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.190994  108684 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.192408  108684 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.193353  108684 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.193852  108684 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0917 06:43:02.193944  108684 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0917 06:43:02.193959  108684 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0917 06:43:02.194744  108684 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.196166  108684 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.197538  108684 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.198387  108684 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0917 06:43:02.200029  108684 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8570558d-d5e0-4e18-8870-4332560e09ac", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
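Each storage_factory.go line above records the apiserver wiring one resource's REST storage onto the shared etcd backend (ServerList http://127.0.0.1:2379) under this run's unique storage prefix, and the genericapiserver.go warnings mark group versions (scheduling.k8s.io/v1alpha1, storage.k8s.io/v1alpha1, apps/v1beta1, apps/v1beta2) that end up with no resources and are therefore not served. One way to see which group versions the test apiserver does serve is its discovery endpoint; the sketch below is a minimal, standard-library-only example, and the 127.0.0.1:8080 address is a placeholder for whatever loopback address the integration test actually binds.

// discovery.go: minimal sketch (stdlib only); the apiserver address is an assumption.
package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    // GET /apis returns the API group/version discovery document; group
    // versions skipped above ("no resources") will not appear in it.
    resp, err := http.Get("http://127.0.0.1:8080/apis") // placeholder address
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(body))
}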
I0917 06:43:02.205674  108684 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0917 06:43:02.205709  108684 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0917 06:43:02.205724  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:02.205753  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:02.205775  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:02.205783  108684 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:02.205813  108684 httplog.go:90] GET /healthz: (239.758µs) 0 [Go-http-client/1.1 127.0.0.1:59772]
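The /healthz probe above, and the near-identical blocks repeated below, fail because the etcd client connection and the listed post-start hooks (the [-] entries) have not finished yet; the two clients visible in the log keep polling roughly every 100ms until every check flips to [+]. A minimal sketch of such a readiness poll, assuming the same placeholder apiserver address as the other sketches in this section:

// healthz_poll.go: minimal sketch of waiting for /healthz to go healthy;
// the apiserver address below is an assumption.
package main

import (
    "fmt"
    "net/http"
    "time"
)

func main() {
    const healthz = "http://127.0.0.1:8080/healthz" // placeholder address
    deadline := time.Now().Add(30 * time.Second)
    for time.Now().Before(deadline) {
        resp, err := http.Get(healthz)
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
        }
        time.Sleep(100 * time.Millisecond) // roughly the cadence seen in the log
    }
    fmt.Println("timed out waiting for /healthz")
}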
I0917 06:43:02.206725  108684 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.491198ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:02.209838  108684 httplog.go:90] GET /api/v1/services: (1.339807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:02.214452  108684 httplog.go:90] GET /api/v1/services: (1.2527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:02.217111  108684 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0917 06:43:02.217141  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:02.217153  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:02.217163  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:02.217171  108684 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:02.217195  108684 httplog.go:90] GET /healthz: (235.733µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:02.220075  108684 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.765104ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59772]
I0917 06:43:02.220297  108684 httplog.go:90] GET /api/v1/services: (1.532884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:02.220487  108684 httplog.go:90] GET /api/v1/services: (1.627671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59774]
I0917 06:43:02.222557  108684 httplog.go:90] POST /api/v1/namespaces: (1.99836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59772]
I0917 06:43:02.225039  108684 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.090787ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59774]
I0917 06:43:02.227810  108684 httplog.go:90] POST /api/v1/namespaces: (1.844051ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59774]
I0917 06:43:02.229819  108684 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.397578ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59774]
I0917 06:43:02.233369  108684 httplog.go:90] POST /api/v1/namespaces: (2.904128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59774]
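The GET-404 / POST-201 pairs just above are the bootstrap-controller post-start hook making sure the built-in namespaces exist (kube-system, kube-public and kube-node-lease in this trace): it probes each namespace and creates the missing ones via POST /api/v1/namespaces. A hedged, stdlib-only sketch of that create call, again with the placeholder address and assuming no client authentication is required, as appears to be the case for this integration apiserver:

// create_namespace.go: minimal sketch of the POST issued by the hook;
// the address is a placeholder and no credentials are attached (assumption).
package main

import (
    "bytes"
    "fmt"
    "net/http"
)

func main() {
    manifest := []byte(`{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"kube-node-lease"}}`)
    resp, err := http.Post(
        "http://127.0.0.1:8080/api/v1/namespaces", // placeholder address
        "application/json",
        bytes.NewReader(manifest),
    )
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    // 201 Created on first creation, 409 Conflict if the namespace already exists.
    fmt.Println(resp.Status)
}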
I0917 06:43:02.306878  108684 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0917 06:43:02.306959  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:02.306972  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:02.306981  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:02.306990  108684 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:02.307030  108684 httplog.go:90] GET /healthz: (322.604µs) 0 [Go-http-client/1.1 127.0.0.1:59774]
I0917 06:43:02.318680  108684 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0917 06:43:02.318723  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:02.318735  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:02.318748  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:02.318772  108684 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:02.318808  108684 httplog.go:90] GET /healthz: (323.83µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59774]
I0917 06:43:02.407302  108684 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0917 06:43:02.407328  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:02.407337  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:02.407343  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:02.407349  108684 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:02.407370  108684 httplog.go:90] GET /healthz: (187.922µs) 0 [Go-http-client/1.1 127.0.0.1:59774]
I0917 06:43:02.419069  108684 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0917 06:43:02.419098  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:02.419117  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:02.419127  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:02.419135  108684 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:02.419180  108684 httplog.go:90] GET /healthz: (255.499µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59774]
I0917 06:43:02.508250  108684 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0917 06:43:02.508288  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:02.508302  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:02.508311  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:02.508319  108684 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:02.508349  108684 httplog.go:90] GET /healthz: (231.29µs) 0 [Go-http-client/1.1 127.0.0.1:59774]
I0917 06:43:02.520713  108684 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0917 06:43:02.520780  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:02.520794  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:02.520804  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:02.520825  108684 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:02.520874  108684 httplog.go:90] GET /healthz: (307.299µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59774]
I0917 06:43:02.606864  108684 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0917 06:43:02.606900  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:02.606920  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:02.606928  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:02.606938  108684 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:02.606965  108684 httplog.go:90] GET /healthz: (272.665µs) 0 [Go-http-client/1.1 127.0.0.1:59774]
I0917 06:43:02.618713  108684 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0917 06:43:02.618751  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:02.618828  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:02.618840  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:02.618848  108684 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:02.618880  108684 httplog.go:90] GET /healthz: (298.4µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59774]
I0917 06:43:02.706906  108684 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0917 06:43:02.706942  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:02.706954  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:02.706963  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:02.706971  108684 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:02.706999  108684 httplog.go:90] GET /healthz: (297.055µs) 0 [Go-http-client/1.1 127.0.0.1:59774]
I0917 06:43:02.718792  108684 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0917 06:43:02.718823  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:02.718835  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:02.718848  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:02.718856  108684 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:02.718891  108684 httplog.go:90] GET /healthz: (268.411µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59774]
I0917 06:43:02.806947  108684 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0917 06:43:02.807115  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:02.807139  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:02.807149  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:02.807156  108684 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:02.807229  108684 httplog.go:90] GET /healthz: (403.099µs) 0 [Go-http-client/1.1 127.0.0.1:59774]
I0917 06:43:02.818747  108684 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0917 06:43:02.818856  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:02.818871  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:02.818881  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:02.818889  108684 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:02.818923  108684 httplog.go:90] GET /healthz: (345.577µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59774]
I0917 06:43:02.907075  108684 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0917 06:43:02.907110  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:02.907122  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:02.907131  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:02.907139  108684 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:02.907168  108684 httplog.go:90] GET /healthz: (251.572µs) 0 [Go-http-client/1.1 127.0.0.1:59774]
I0917 06:43:02.918683  108684 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0917 06:43:02.918714  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:02.918727  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:02.918737  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:02.918744  108684 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:02.918790  108684 httplog.go:90] GET /healthz: (236.28µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59774]
I0917 06:43:02.952819  108684 client.go:361] parsed scheme: "endpoint"
I0917 06:43:02.952902  108684 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0917 06:43:03.010020  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.010050  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:03.010061  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:03.010069  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:03.010120  108684 httplog.go:90] GET /healthz: (3.465558ms) 0 [Go-http-client/1.1 127.0.0.1:59774]
E0917 06:43:03.012660  108684 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:37625/apis/events.k8s.io/v1beta1/namespaces/permit-plugin999ff825-6506-4fd3-a897-7f161efaac4a/events: dial tcp 127.0.0.1:37625: connect: connection refused' (may retry after sleeping)
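The single E-level line above most likely does not belong to this test's apiserver: the target port 37625 and the permit-plugin namespace in the URL point to an event broadcaster left running by an earlier permit-plugin scheduler test whose apiserver has already been torn down, so its POST to /apis/events.k8s.io/v1beta1/.../events is refused. As the message itself notes, the broadcaster retries after sleeping, so this is background noise from a previous test rather than output from TestNodePIDPressure itself.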
I0917 06:43:03.019541  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.019586  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:03.019597  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:03.019605  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:03.019639  108684 httplog.go:90] GET /healthz: (900.533µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59774]
I0917 06:43:03.107888  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.107919  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:03.107929  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:03.107937  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:03.107974  108684 httplog.go:90] GET /healthz: (1.313899ms) 0 [Go-http-client/1.1 127.0.0.1:59774]
I0917 06:43:03.123340  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.123370  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:03.123381  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:03.123390  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:03.123431  108684 httplog.go:90] GET /healthz: (3.391763ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59774]
I0917 06:43:03.207644  108684 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.227055ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.208036  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.208066  108684 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0917 06:43:03.208075  108684 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0917 06:43:03.208083  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0917 06:43:03.208112  108684 httplog.go:90] GET /healthz: (1.103915ms) 0 [Go-http-client/1.1 127.0.0.1:59830]
I0917 06:43:03.208424  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.983186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59774]
I0917 06:43:03.208525  108684 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.943441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.210422  108684 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.780333ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.210574  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.830521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59774]
I0917 06:43:03.210746  108684 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0917 06:43:03.213124  108684 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.731283ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.214890  108684 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (4.722935ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.217137  108684 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (3.467505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.217287  108684 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0917 06:43:03.217302  108684 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0917 06:43:03.217492  108684 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.320301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
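At this point two more post-start hooks complete: scheduling/bootstrap-system-priority-classes has created the built-in PriorityClasses system-node-critical (value 2000001000) and system-cluster-critical (value 2000000000) through the scheduling.k8s.io/v1beta1 API, and the ConfigMap POST above is the ca-registration hook publishing extension-apiserver-authentication in kube-system; the next /healthz responses show both of those checks as [+]. A minimal sketch of a PriorityClass creation like the one the hook performs, with the same placeholder address and an illustrative (non-system) object:

// create_priorityclass.go: minimal sketch mirroring the POST above; the
// address is a placeholder and the payload is illustrative, not the hook's.
package main

import (
    "bytes"
    "fmt"
    "net/http"
)

func main() {
    pc := []byte(`{
  "apiVersion": "scheduling.k8s.io/v1beta1",
  "kind": "PriorityClass",
  "metadata": {"name": "example-critical"},
  "value": 1000000,
  "globalDefault": false
}`)
    resp, err := http.Post(
        "http://127.0.0.1:8080/apis/scheduling.k8s.io/v1beta1/priorityclasses", // placeholder address
        "application/json",
        bytes.NewReader(pc),
    )
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    fmt.Println(resp.Status) // 201 Created on success
}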
I0917 06:43:03.223607  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (12.503884ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59830]
I0917 06:43:03.224212  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.224235  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:03.224266  108684 httplog.go:90] GET /healthz: (1.122471ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.226351  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (2.282905ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59830]
I0917 06:43:03.229679  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (2.303837ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.231647  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.264963ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.234037  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.706074ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.235516  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.099398ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.237832  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.144244ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.239005  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (785.396µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.244435  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.768992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.244584  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0917 06:43:03.245696  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (952.25µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.250148  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.009893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.250413  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
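Everything from here to the end of the section is the rbac/bootstrap-roles post-start hook reconciling the default RBAC objects one by one: for each built-in ClusterRole (ClusterRoleBindings follow the same pattern) it issues a GET, receives 404 on the fresh storage prefix, POSTs the object, and logs "created clusterrole...". Only once the whole set exists does the last remaining [-] healthz check clear. A hedged sketch of that get-or-create pattern over plain HTTP, with a placeholder address and role:

// ensure_clusterrole.go: minimal sketch of the GET-then-POST reconciliation
// visible in the log; the address and role body are placeholders.
package main

import (
    "bytes"
    "fmt"
    "net/http"
)

const apiserver = "http://127.0.0.1:8080" // placeholder address

func ensureClusterRole(name string, manifest []byte) error {
    // Probe first; the bootstrap hook only creates roles that are missing.
    resp, err := http.Get(apiserver + "/apis/rbac.authorization.k8s.io/v1/clusterroles/" + name)
    if err != nil {
        return err
    }
    resp.Body.Close()
    if resp.StatusCode != http.StatusNotFound {
        return nil // already present; the real hook would reconcile its rules
    }
    post, err := http.Post(apiserver+"/apis/rbac.authorization.k8s.io/v1/clusterroles",
        "application/json", bytes.NewReader(manifest))
    if err != nil {
        return err
    }
    post.Body.Close()
    fmt.Println("created", name, post.Status)
    return nil
}

func main() {
    role := []byte(`{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole",` +
        `"metadata":{"name":"example-role"},"rules":[]}`)
    if err := ensureClusterRole("example-role", role); err != nil {
        panic(err)
    }
}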
I0917 06:43:03.254895  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (4.271828ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.257247  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.984926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.257732  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0917 06:43:03.259097  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.170119ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.261308  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.727794ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.261590  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0917 06:43:03.264268  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (2.444582ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.267624  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.788792ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.267871  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0917 06:43:03.269323  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (973.227µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.271458  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.708329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.271665  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0917 06:43:03.273086  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.123587ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.275191  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.660625ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.275567  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0917 06:43:03.276776  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (954.763µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.279987  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.799443ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.280315  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0917 06:43:03.282232  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.756287ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.288093  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.01192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.288397  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0917 06:43:03.290922  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (2.257518ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.298087  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.686313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.298427  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0917 06:43:03.299663  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.000351ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.302863  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.652537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.303250  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0917 06:43:03.304463  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.00546ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.308020  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.308043  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:03.308074  108684 httplog.go:90] GET /healthz: (1.47824ms) 0 [Go-http-client/1.1 127.0.0.1:59770]
I0917 06:43:03.308930  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.818218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.309291  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0917 06:43:03.311615  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.611832ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.315615  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.207851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.315873  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0917 06:43:03.317881  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.7819ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.320237  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.975914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.320445  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.320463  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0917 06:43:03.320466  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:03.320688  108684 httplog.go:90] GET /healthz: (2.249478ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.321720  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (819.406µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.323994  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.785899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.324198  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0917 06:43:03.326049  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.647221ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.328546  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.904369ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.328817  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0917 06:43:03.330611  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.519238ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.333583  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.306848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.333839  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0917 06:43:03.335069  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.03639ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.340974  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.426787ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.341233  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0917 06:43:03.342674  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.112237ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.349050  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.721774ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.349624  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0917 06:43:03.351226  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.191844ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.355367  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.637693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.355868  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0917 06:43:03.357753  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.436021ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.360594  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.134176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.361101  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0917 06:43:03.362470  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.032159ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.366568  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.158384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.368192  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0917 06:43:03.369600  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.060369ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.372578  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.206768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.372977  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0917 06:43:03.374966  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.477765ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.377665  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.044329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.378037  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0917 06:43:03.379627  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.316694ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.383104  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.966562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.383684  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0917 06:43:03.387879  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (3.345207ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.392989  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.087534ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.393839  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0917 06:43:03.398080  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (3.851035ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.400989  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.100327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.401449  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0917 06:43:03.403147  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.411048ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.406477  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.311002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.406708  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0917 06:43:03.407499  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.407535  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:03.407592  108684 httplog.go:90] GET /healthz: (996.494µs) 0 [Go-http-client/1.1 127.0.0.1:59770]
I0917 06:43:03.407560  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (619.054µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.410276  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.029551ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.410496  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0917 06:43:03.412476  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.750805ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.415588  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.440469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.416433  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0917 06:43:03.417931  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.221146ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.419408  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.419437  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:03.419481  108684 httplog.go:90] GET /healthz: (953.844µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.420843  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.959299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.421574  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0917 06:43:03.422900  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (996.886µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.426279  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.081478ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.426477  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0917 06:43:03.427488  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (854.58µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.430540  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.532657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.431243  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0917 06:43:03.432503  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.017691ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.435886  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.957628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.436140  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0917 06:43:03.437415  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.021416ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.439797  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.627567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.439997  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0917 06:43:03.441533  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (994.759µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.444825  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.932073ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.445013  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0917 06:43:03.445997  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (810.442µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.448669  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.11455ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.449028  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0917 06:43:03.450249  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (985.917µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.453586  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.930123ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.454713  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0917 06:43:03.455901  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (953.498µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.457741  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.417141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.458009  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0917 06:43:03.459043  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (852.943µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.461427  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.957652ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.461700  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0917 06:43:03.463242  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.23629ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.468482  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.590893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.468833  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0917 06:43:03.470231  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.056899ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.473486  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.768438ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.473743  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0917 06:43:03.474899  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (917.663µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.476871  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.549788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.477188  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0917 06:43:03.478438  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (975.867µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.480494  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.642759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.480723  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0917 06:43:03.481884  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (838.54µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.486680  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.665154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.486938  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0917 06:43:03.488196  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.04102ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.490550  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.565018ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.490876  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0917 06:43:03.493354  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (2.135752ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.496028  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.946197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.496396  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0917 06:43:03.497484  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (892.505µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.499318  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.452101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.499520  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0917 06:43:03.501259  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.439549ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.503995  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.886281ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.504369  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0917 06:43:03.506704  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.878444ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.507650  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.507846  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:03.508075  108684 httplog.go:90] GET /healthz: (1.489102ms) 0 [Go-http-client/1.1 127.0.0.1:59770]
I0917 06:43:03.509189  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.005449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.509424  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0917 06:43:03.510445  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (802.507µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.513031  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.184481ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.513379  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0917 06:43:03.514686  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (961.732µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.517407  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.149345ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.517755  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0917 06:43:03.519251  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.519285  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:03.519316  108684 httplog.go:90] GET /healthz: (762.276µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.519306  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.346535ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.521131  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.312394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.521446  108684 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0917 06:43:03.527237  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.292539ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.548224  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.258251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.548562  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0917 06:43:03.568346  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (2.22935ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.588590  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.993411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.589018  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0917 06:43:03.607595  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.607830  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.752196ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.607935  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:03.607977  108684 httplog.go:90] GET /healthz: (1.359932ms) 0 [Go-http-client/1.1 127.0.0.1:59828]
I0917 06:43:03.619656  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.619688  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:03.619732  108684 httplog.go:90] GET /healthz: (1.191879ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.629096  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.600259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.629390  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0917 06:43:03.647188  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.141453ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.669736  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.608242ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.670216  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0917 06:43:03.687452  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.336912ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.707541  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.707570  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:03.707610  108684 httplog.go:90] GET /healthz: (969.377µs) 0 [Go-http-client/1.1 127.0.0.1:59770]
I0917 06:43:03.708170  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.14139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.708493  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0917 06:43:03.720314  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.720348  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:03.720391  108684 httplog.go:90] GET /healthz: (1.729361ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.727954  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.46655ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.754571  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.040447ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.754962  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0917 06:43:03.769341  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.132338ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.788855  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.349514ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.790926  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0917 06:43:03.807205  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.223922ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:03.807593  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.807616  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:03.807659  108684 httplog.go:90] GET /healthz: (1.062699ms) 0 [Go-http-client/1.1 127.0.0.1:59770]
I0917 06:43:03.819885  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.819923  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:03.819962  108684 httplog.go:90] GET /healthz: (1.336619ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.828818  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.286854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.829092  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0917 06:43:03.847706  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.68911ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.868354  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.204562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.869824  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0917 06:43:03.887469  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.335332ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.908436  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.908466  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:03.908508  108684 httplog.go:90] GET /healthz: (1.901585ms) 0 [Go-http-client/1.1 127.0.0.1:59828]
I0917 06:43:03.908545  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.528516ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.909272  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0917 06:43:03.919745  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:03.919838  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:03.919878  108684 httplog.go:90] GET /healthz: (1.309921ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.927831  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.113117ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.949280  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.343707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.949822  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0917 06:43:03.969004  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.657582ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.988027  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.908384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:03.988355  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0917 06:43:04.008508  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.008539  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.008581  108684 httplog.go:90] GET /healthz: (1.932334ms) 0 [Go-http-client/1.1 127.0.0.1:59828]
I0917 06:43:04.008633  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (2.57143ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.019590  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.019625  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.019667  108684 httplog.go:90] GET /healthz: (1.066402ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.028103  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.119229ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.028372  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0917 06:43:04.048273  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (2.197829ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.070777  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.644994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.071071  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0917 06:43:04.087081  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.102292ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.109290  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.318783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.109440  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.109456  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.109487  108684 httplog.go:90] GET /healthz: (2.187864ms) 0 [Go-http-client/1.1 127.0.0.1:59828]
I0917 06:43:04.109819  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0917 06:43:04.121089  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.121130  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.121192  108684 httplog.go:90] GET /healthz: (2.008765ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.127161  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.16121ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.149107  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.008653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.149329  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0917 06:43:04.167457  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.479957ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.188844  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.68157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.189318  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0917 06:43:04.211066  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.211103  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.211165  108684 httplog.go:90] GET /healthz: (4.59958ms) 0 [Go-http-client/1.1 127.0.0.1:59828]
I0917 06:43:04.211190  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (5.191854ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.219379  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.219408  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.219444  108684 httplog.go:90] GET /healthz: (919.093µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.234674  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.132097ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.235093  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0917 06:43:04.247422  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.400666ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.268814  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.837657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.269080  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0917 06:43:04.288735  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (2.679834ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.313129  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.313180  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.313226  108684 httplog.go:90] GET /healthz: (6.040232ms) 0 [Go-http-client/1.1 127.0.0.1:59770]
I0917 06:43:04.313890  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.88838ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.314172  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0917 06:43:04.319317  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.319344  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.319392  108684 httplog.go:90] GET /healthz: (758.817µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.327486  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.433174ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.348617  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.111968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.348933  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0917 06:43:04.367473  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.471647ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.388047  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.015911ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.388575  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0917 06:43:04.410272  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.410313  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.410351  108684 httplog.go:90] GET /healthz: (1.180441ms) 0 [Go-http-client/1.1 127.0.0.1:59770]
I0917 06:43:04.410722  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.555761ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.422175  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.422208  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.422259  108684 httplog.go:90] GET /healthz: (1.278829ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.427992  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.989587ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.428289  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0917 06:43:04.447188  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.236237ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.468119  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.102429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.468536  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0917 06:43:04.487326  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.131156ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.507727  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.507864  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.507930  108684 httplog.go:90] GET /healthz: (1.365078ms) 0 [Go-http-client/1.1 127.0.0.1:59770]
I0917 06:43:04.508400  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.269537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.508643  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0917 06:43:04.519945  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.519986  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.520037  108684 httplog.go:90] GET /healthz: (1.37801ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.527173  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.126709ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.547998  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.043425ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.548225  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0917 06:43:04.573223  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.567621ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.588853  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.305171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.589083  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0917 06:43:04.608110  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.608141  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.608197  108684 httplog.go:90] GET /healthz: (1.168254ms) 0 [Go-http-client/1.1 127.0.0.1:59770]
I0917 06:43:04.608197  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.814625ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.619996  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.620063  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.620111  108684 httplog.go:90] GET /healthz: (1.523658ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.628865  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.630302ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.629207  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0917 06:43:04.647088  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.146129ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.670284  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.255626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.670634  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0917 06:43:04.687484  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.508635ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.708018  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.941742ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.708313  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0917 06:43:04.708438  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.708453  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.708483  108684 httplog.go:90] GET /healthz: (1.883399ms) 0 [Go-http-client/1.1 127.0.0.1:59770]
I0917 06:43:04.719755  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.719832  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.719871  108684 httplog.go:90] GET /healthz: (1.30103ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.727458  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.485968ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.748199  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.224565ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.748464  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0917 06:43:04.767022  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.004458ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.789489  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.444362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.789800  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0917 06:43:04.807581  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.337842ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.807966  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.807989  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.808015  108684 httplog.go:90] GET /healthz: (1.202606ms) 0 [Go-http-client/1.1 127.0.0.1:59828]
I0917 06:43:04.833303  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.833340  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.833383  108684 httplog.go:90] GET /healthz: (14.477212ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:04.838489  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (12.459979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.838802  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0917 06:43:04.848346  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (2.249105ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.868089  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.089331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.868350  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0917 06:43:04.887100  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.128028ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.909053  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.102618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.909216  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.909248  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.909296  108684 httplog.go:90] GET /healthz: (2.453634ms) 0 [Go-http-client/1.1 127.0.0.1:59828]
I0917 06:43:04.909641  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0917 06:43:04.919792  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:04.919825  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:04.919863  108684 httplog.go:90] GET /healthz: (1.286755ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.927117  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.122399ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.948142  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.101657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.948392  108684 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0917 06:43:04.967385  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.44443ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.969033  108684 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.22339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.988628  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.40402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:04.989008  108684 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0917 06:43:05.007586  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.663052ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.009376  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:05.009402  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:05.009438  108684 httplog.go:90] GET /healthz: (2.95712ms) 0 [Go-http-client/1.1 127.0.0.1:59828]
I0917 06:43:05.010478  108684 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.761075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.019841  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:05.019873  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:05.019911  108684 httplog.go:90] GET /healthz: (1.297683ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.030364  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.559081ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.030618  108684 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0917 06:43:05.047894  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.172669ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.049833  108684 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.437204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.068455  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.384472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.068998  108684 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0917 06:43:05.088571  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (2.320701ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.090339  108684 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.29493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.107698  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:05.107732  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:05.107789  108684 httplog.go:90] GET /healthz: (1.021675ms) 0 [Go-http-client/1.1 127.0.0.1:59828]
I0917 06:43:05.108311  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.303009ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.108592  108684 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0917 06:43:05.121455  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:05.121503  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:05.121540  108684 httplog.go:90] GET /healthz: (1.17938ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.127433  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.482996ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.129484  108684 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.587837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.148249  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.23312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.148541  108684 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0917 06:43:05.167349  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.33812ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.171943  108684 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.76805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.191030  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.519677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.191391  108684 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0917 06:43:05.208798  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.750693ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.208938  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:05.208952  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:05.208977  108684 httplog.go:90] GET /healthz: (2.379217ms) 0 [Go-http-client/1.1 127.0.0.1:59828]
I0917 06:43:05.211864  108684 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.311777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.222245  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:05.222279  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:05.222351  108684 httplog.go:90] GET /healthz: (1.162224ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.228665  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.597042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.228938  108684 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0917 06:43:05.247232  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.248583ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.249501  108684 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.460386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.268316  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.175484ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.268610  108684 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0917 06:43:05.287161  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.209115ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.291818  108684 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.263575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.308560  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:05.308610  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:05.308663  108684 httplog.go:90] GET /healthz: (2.104359ms) 0 [Go-http-client/1.1 127.0.0.1:59828]
I0917 06:43:05.308738  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.768991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.309343  108684 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0917 06:43:05.319494  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:05.319526  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:05.319566  108684 httplog.go:90] GET /healthz: (1.058492ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.327331  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.343149ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.329487  108684 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.416019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.348464  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.481046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.348809  108684 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0917 06:43:05.373828  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.678451ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.375907  108684 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.427428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.388498  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.545155ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.388824  108684 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0917 06:43:05.409573  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:05.409602  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:05.409644  108684 httplog.go:90] GET /healthz: (2.935481ms) 0 [Go-http-client/1.1 127.0.0.1:59770]
I0917 06:43:05.410415  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.373412ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:05.412482  108684 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.643838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:05.419603  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:05.419631  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:05.419663  108684 httplog.go:90] GET /healthz: (919.685µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:05.428834  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.156619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:05.429111  108684 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0917 06:43:05.447620  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.642104ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:05.449485  108684 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.364532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:05.468265  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.226002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:05.468895  108684 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0917 06:43:05.487465  108684 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.431953ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:05.489685  108684 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.449785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:05.509181  108684 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.517333ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.509184  108684 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0917 06:43:05.509295  108684 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0917 06:43:05.509327  108684 httplog.go:90] GET /healthz: (2.587779ms) 0 [Go-http-client/1.1 127.0.0.1:59828]
I0917 06:43:05.509472  108684 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0917 06:43:05.522209  108684 httplog.go:90] GET /healthz: (3.644581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:05.524875  108684 httplog.go:90] GET /api/v1/namespaces/default: (2.070586ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:05.529173  108684 httplog.go:90] POST /api/v1/namespaces: (3.753576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:05.531542  108684 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.957873ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:05.541517  108684 httplog.go:90] POST /api/v1/namespaces/default/services: (8.966671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:05.545016  108684 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (3.022203ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:05.549446  108684 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (3.674958ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:05.608184  108684 httplog.go:90] GET /healthz: (1.488254ms) 200 [Go-http-client/1.1 127.0.0.1:59828]
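The repeated `[-]poststarthook/rbac/bootstrap-roles failed` blocks above show /healthz returning a non-200 status until the RBAC bootstrap post-start hook finishes reconciling the default roles and rolebindings; once the last binding lands (~06:43:05.52) the endpoint flips to 200 and setup continues. A minimal sketch of that kind of readiness poll against the test apiserver — the URL, interval, and timeout here are illustrative, not taken from the harness:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it returns
// 200 or the deadline expires, mirroring the probes visible in the log:
// early ones report failed post-start hooks, later ones succeed.
func waitForHealthz(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %v", timeout)
}

func main() {
	// Illustrative address; the integration test wires up its own listener.
	if err := waitForHealthz("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```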
W0917 06:43:05.608935  108684 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0917 06:43:05.608976  108684 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0917 06:43:05.608995  108684 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0917 06:43:05.609026  108684 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0917 06:43:05.609036  108684 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0917 06:43:05.609045  108684 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0917 06:43:05.609058  108684 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0917 06:43:05.609069  108684 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0917 06:43:05.609077  108684 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0917 06:43:05.609092  108684 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0917 06:43:05.609134  108684 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0917 06:43:05.609151  108684 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0917 06:43:05.609160  108684 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
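The scheduler here is assembled from the DefaultProvider, and its predicate set includes CheckNodePIDPressure, the check this test exercises. A simplified sketch of the kind of node-condition inspection such a predicate performs — an illustration of the idea, not the upstream implementation:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// nodeUnderPIDPressure reports whether the node's status carries a
// PIDPressure condition set to True, which is the signal a predicate
// like CheckNodePIDPressure uses to filter the node out for pods that
// do not tolerate the pressure.
func nodeUnderPIDPressure(node *v1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodePIDPressure && cond.Status == v1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	node := &v1.Node{}
	node.Status.Conditions = []v1.NodeCondition{
		{Type: v1.NodePIDPressure, Status: v1.ConditionTrue},
	}
	fmt.Println(nodeUnderPIDPressure(node)) // true
}
```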
I0917 06:43:05.609349  108684 shared_informer.go:197] Waiting for caches to sync for scheduler
I0917 06:43:05.609539  108684 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0917 06:43:05.609552  108684 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0917 06:43:05.610491  108684 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (647.961µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:05.611540  108684 get.go:251] Starting watch for /api/v1/pods, rv=30144 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=5m28s
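The pod reflector started just above lists pods with a field selector that excludes terminal phases, so Failed and Succeeded pods never enter the scheduling cache. The same filter expressed as a plain client-go list, using the pre-context signatures of this era (clientset construction is illustrative):

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{Host: "http://127.0.0.1:8080"} // illustrative
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Matches the fieldSelector in the GET /api/v1/pods request above:
	// only pods that are neither Failed nor Succeeded are returned.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(metav1.ListOptions{
		FieldSelector: "status.phase!=Failed,status.phase!=Succeeded",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("scheduling-cache candidates:", len(pods.Items))
}
```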
I0917 06:43:05.709881  108684 shared_informer.go:227] caches populated
I0917 06:43:05.709918  108684 shared_informer.go:204] Caches are synced for scheduler 
I0917 06:43:05.710252  108684 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.710282  108684 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.710692  108684 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.710714  108684 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.711081  108684 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.711107  108684 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.711396  108684 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.711419  108684 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.711733  108684 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.711752  108684 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.711865  108684 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.711895  108684 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.712886  108684 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (416.82µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:05.713367  108684 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (523.092µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59884]
I0917 06:43:05.713467  108684 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.713488  108684 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.713521  108684 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30144 labels= fields= timeout=6m16s
I0917 06:43:05.713973  108684 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.713993  108684 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.714087  108684 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (364.234µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59884]
I0917 06:43:05.714169  108684 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.714184  108684 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.714241  108684 get.go:251] Starting watch for /api/v1/nodes, rv=30144 labels= fields= timeout=5m29s
I0917 06:43:05.714450  108684 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.714464  108684 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0917 06:43:05.714801  108684 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (359.074µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59884]
I0917 06:43:05.715037  108684 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (497.191µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59888]
I0917 06:43:05.715299  108684 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (336.23µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59880]
I0917 06:43:05.715795  108684 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (836.747µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59890]
I0917 06:43:05.715795  108684 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=30144 labels= fields= timeout=5m18s
I0917 06:43:05.716092  108684 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (708.929µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59882]
I0917 06:43:05.716271  108684 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=30144 labels= fields= timeout=5m12s
I0917 06:43:05.716357  108684 get.go:251] Starting watch for /api/v1/services, rv=30573 labels= fields= timeout=8m48s
I0917 06:43:05.716923  108684 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=30144 labels= fields= timeout=9m40s
I0917 06:43:05.717034  108684 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=30144 labels= fields= timeout=7m33s
I0917 06:43:05.717235  108684 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=30144 labels= fields= timeout=5m0s
I0917 06:43:05.717425  108684 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (2.65304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59878]
I0917 06:43:05.718029  108684 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30144 labels= fields= timeout=7m11s
I0917 06:43:05.718073  108684 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (451.492µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59876]
I0917 06:43:05.718669  108684 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30144 labels= fields= timeout=7m10s
I0917 06:43:05.810236  108684 shared_informer.go:227] caches populated
I0917 06:43:05.810268  108684 shared_informer.go:227] caches populated
I0917 06:43:05.810276  108684 shared_informer.go:227] caches populated
I0917 06:43:05.810282  108684 shared_informer.go:227] caches populated
I0917 06:43:05.810288  108684 shared_informer.go:227] caches populated
I0917 06:43:05.810294  108684 shared_informer.go:227] caches populated
I0917 06:43:05.810300  108684 shared_informer.go:227] caches populated
I0917 06:43:05.810307  108684 shared_informer.go:227] caches populated
I0917 06:43:05.810313  108684 shared_informer.go:227] caches populated
I0917 06:43:05.810322  108684 shared_informer.go:227] caches populated
I0917 06:43:05.810332  108684 shared_informer.go:227] caches populated
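The burst of reflector starts and `caches populated` lines corresponds to the scheduler's shared informer factory listing and watching each resource type, then blocking until every cache has synced. A minimal sketch of that pattern with client-go; the 1s resync period matches the interval behind the periodic `forcing resync` entries later in the log, while the clientset construction is illustrative:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{Host: "http://127.0.0.1:8080"} // illustrative
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A 1s resync period produces the periodic "forcing resync" log lines.
	factory := informers.NewSharedInformerFactory(client, 1*time.Second)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Block until the started caches are populated, as the
	// "Waiting for caches to sync" / "Caches are synced" lines show.
	for _, synced := range factory.WaitForCacheSync(stop) {
		if !synced {
			panic("caches failed to sync")
		}
	}
	fmt.Println("nodes cached:", len(nodeInformer.GetStore().List()))
}
```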
I0917 06:43:05.813948  108684 httplog.go:90] POST /api/v1/nodes: (2.604031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:05.814116  108684 node_tree.go:93] Added node "testnode" in group "" to NodeTree
I0917 06:43:05.821373  108684 httplog.go:90] PUT /api/v1/nodes/testnode/status: (6.855698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
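The POST of `testnode` followed by a PUT to its `/status` subresource is where the test shapes the node it schedules against; presumably this is where a condition such as PIDPressure is written. A hedged sketch of updating a node condition through the status subresource, using the client-go signatures of this era (host and condition values are illustrative):

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{Host: "http://127.0.0.1:8080"} // illustrative
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := client.CoreV1().Nodes().Get("testnode", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Mark the node as under PID pressure and write it back through the
	// status subresource, which is what the PUT .../testnode/status does.
	node.Status.Conditions = append(node.Status.Conditions, v1.NodeCondition{
		Type:   v1.NodePIDPressure,
		Status: v1.ConditionTrue,
	})
	if _, err := client.CoreV1().Nodes().UpdateStatus(node); err != nil {
		panic(err)
	}
}
```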
I0917 06:43:05.826117  108684 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods: (3.200792ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:05.826816  108684 scheduling_queue.go:830] About to try and schedule pod node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pidpressure-fake-name
I0917 06:43:05.826840  108684 scheduler.go:530] Attempting to schedule pod: node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pidpressure-fake-name
I0917 06:43:05.827029  108684 scheduler_binder.go:257] AssumePodVolumes for pod "node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pidpressure-fake-name", node "testnode"
I0917 06:43:05.827057  108684 scheduler_binder.go:267] AssumePodVolumes for pod "node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pidpressure-fake-name", node "testnode": all PVCs bound and nothing to do
I0917 06:43:05.827117  108684 factory.go:606] Attempting to bind pidpressure-fake-name to testnode
I0917 06:43:05.830113  108684 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name/binding: (2.637792ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:05.830302  108684 scheduler.go:662] pod node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pidpressure-fake-name is bound successfully on node "testnode", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0917 06:43:05.833199  108684 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/events: (2.229592ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
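The successful bind above is the scheduler posting a Binding object to the pod's `binding` subresource, which is what assigns `pidpressure-fake-name` to `testnode`. A minimal sketch of that call as it looks through client-go of this era; the namespace string is shortened here since the real one carries a generated UUID suffix:

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{Host: "http://127.0.0.1:8080"} // illustrative
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "node-pid-pressure-test" // illustrative; real namespace has a UUID suffix
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Name: "pidpressure-fake-name", Namespace: ns},
		Target:     v1.ObjectReference{Kind: "Node", Name: "testnode"},
	}
	// Equivalent to the POST .../pods/pidpressure-fake-name/binding in the log.
	if err := client.CoreV1().Pods(ns).Bind(binding); err != nil {
		panic(err)
	}
}
```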
I0917 06:43:05.929293  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.258133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:06.028883  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.927264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:06.128985  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.905089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:06.229482  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.837547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:06.340420  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (5.122454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:06.428910  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.994317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:06.528652  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.583713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:06.628750  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.836687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
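From here the log is dominated by GETs of `pidpressure-fake-name` roughly every 100ms: the test is sitting in a poll loop waiting for the pod to reach some expected state, and the eventual timeout of that wait is what fails the test. A sketch of that kind of wait, assuming a condition on the pod's `Spec.NodeName` — the actual condition checked by the test util is not visible in this excerpt:

```go
package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// waitForPodScheduled polls the pod every 100ms, matching the cadence of
// the GET requests in the log, until it has been assigned to a node or
// the timeout expires.
func waitForPodScheduled(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.Poll(100*time.Millisecond, timeout, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Spec.NodeName != "", nil
	})
}

func main() {
	cfg := &rest.Config{Host: "http://127.0.0.1:8080"} // illustrative
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "node-pid-pressure-test" // illustrative; real namespace has a UUID suffix
	if err := waitForPodScheduled(client, ns, "pidpressure-fake-name", 30*time.Second); err != nil {
		fmt.Println("wait failed:", err)
	}
}
```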
I0917 06:43:06.713439  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:06.713943  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:06.716198  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:06.716850  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:06.717883  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:06.718583  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:06.729146  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.891069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
E0917 06:43:06.822446  108684 factory.go:590] Error getting pod permit-plugin999ff825-6506-4fd3-a897-7f161efaac4a/test-pod for retry: Get http://127.0.0.1:37625/api/v1/namespaces/permit-plugin999ff825-6506-4fd3-a897-7f161efaac4a/pods/test-pod: dial tcp 127.0.0.1:37625: connect: connection refused; retrying...
I0917 06:43:06.828847  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.942693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:06.928660  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.721205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:07.029147  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.862056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:07.129377  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.462618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:07.233180  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.705969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:07.329089  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.10542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:07.429242  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.131009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:07.529550  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:07.630280  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (3.407638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:07.713651  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:07.714183  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:07.716385  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:07.716993  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:07.718041  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:07.718718  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:07.728919  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.969026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:07.828707  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.820313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:07.928686  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.711436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:08.028657  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.700169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:08.129197  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.76245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:08.228997  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.035847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:08.328644  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.717854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:08.429837  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.602631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:08.530019  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.751011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:08.628638  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.686585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:08.713859  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:08.714348  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:08.716736  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:08.717126  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:08.718488  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:08.718990  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:08.729102  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.167812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:08.828649  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.685908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:08.930582  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (3.643487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:09.028865  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.816492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:09.128584  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.620641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:09.228599  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.718347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:09.328248  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.370279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:09.429992  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.768231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:09.528595  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.717371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:09.628601  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.675441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:09.714118  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:09.714447  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:09.716959  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:09.717274  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:09.718610  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:09.719120  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:09.729219  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.314359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:09.828485  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.61167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:09.928523  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.586619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:10.028646  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.68245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:10.128614  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.679299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:10.228523  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.574032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:10.329280  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.823033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:10.428514  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.557637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:10.528546  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.616779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:10.628562  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.592444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:10.714297  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:10.714630  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:10.717406  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:10.717441  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:10.718776  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:10.719264  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:10.729165  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.650336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:10.829150  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.17298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:10.929322  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.362842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:11.028821  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.877649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:11.128736  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.814598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:11.228823  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.764931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:11.328974  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.055286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:11.428466  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.552068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:11.529496  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.604894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:11.628959  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.935367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:11.714492  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:11.714856  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:11.717567  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:11.717640  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:11.718971  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:11.719410  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:11.728684  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.668038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:11.828995  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.999374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:11.928600  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.65354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:12.028728  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.775999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:12.128539  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.671595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:12.228551  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.620186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:12.328959  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.762823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:12.428878  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.835607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:12.528579  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.591308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:12.628973  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.95818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:12.714697  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:12.714980  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:12.717806  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:12.717848  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:12.719139  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:12.719565  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:12.728821  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.894257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:12.829333  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.359519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:12.928880  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.941445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:13.029197  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.220443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:13.128721  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.823601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:13.228781  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.825872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:13.328754  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.81151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:13.428848  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.882951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:13.530082  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.997816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:13.628504  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.392095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:13.715074  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:13.715123  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:13.718058  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:13.718096  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:13.719246  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:13.719722  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:13.729536  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.577702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:13.828812  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.718875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:13.930200  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.635487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:14.028833  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.849745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:14.128621  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.662019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:14.230029  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.994419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:14.328555  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.593401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:14.429265  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.300239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
E0917 06:43:14.448068  108684 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:37625/apis/events.k8s.io/v1beta1/namespaces/permit-plugin999ff825-6506-4fd3-a897-7f161efaac4a/events: dial tcp 127.0.0.1:37625: connect: connection refused' (may retry after sleeping)
I0917 06:43:14.528777  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.78941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:14.628561  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.677699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:14.715356  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:14.715399  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:14.718434  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:14.718990  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:14.719402  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:14.719802  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:14.728786  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.738528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:14.846516  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (5.238775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:14.928565  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.674848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:15.028856  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.883616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:15.128941  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.022719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:15.228567  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.607295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:15.329115  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.207072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:15.430789  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.760523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:15.529953  108684 httplog.go:90] GET /api/v1/namespaces/default: (6.114791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:15.533338  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (3.036392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:15.533341  108684 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.247299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:15.540097  108684 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (5.990565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:15.628857  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.900581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:15.715479  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:15.715783  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:15.718643  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:15.719400  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:15.719967  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:15.720007  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:15.728741  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.757641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:15.828497  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.348953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:15.929106  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.738046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:16.028939  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.93914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:16.128919  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.994023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:16.229208  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.305069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:16.328484  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.631309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:16.428835  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.824762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:16.528415  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.545906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:16.628863  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.872772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:16.715609  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:16.715993  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:16.718849  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:16.719529  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:16.720170  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:16.720206  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:16.728891  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.899182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:16.828931  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.647273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:16.928811  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.880951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:17.028609  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.688932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:17.128665  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.757777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:17.228966  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.935939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:17.329157  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.202842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:17.429060  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.80557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:17.528707  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.785431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:17.628836  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.91917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:17.715966  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:17.716117  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:17.719043  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:17.719673  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:17.720284  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:17.720314  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:17.728861  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.789085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:18.200707  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (4.731951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:18.229796  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.841284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:18.328933  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.979284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:18.434447  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.558843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:18.528460  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.557185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:18.628543  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.568032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:18.716086  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:18.716425  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:18.719253  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:18.719805  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:18.720440  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:18.720458  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:18.729063  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.946208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:18.828592  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.677406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:18.929159  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.213515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:19.028615  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.687273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:19.128781  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.870444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:19.231249  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.643187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:19.329281  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.349786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:19.428582  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.661184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:19.528550  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.567784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:19.628676  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.739499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:19.716232  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:19.716526  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:19.719400  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:19.719974  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:19.720583  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:19.720585  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:19.729392  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.324534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:19.828341  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.516235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:19.928529  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.586616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:20.028709  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.694676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:20.128469  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.579834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:20.228528  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.637111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:20.328806  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.863044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:20.428520  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.604406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:20.528465  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.58276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:20.628933  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.712152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:20.716651  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:20.716701  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:20.719563  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:20.720060  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:20.720743  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:20.720815  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:20.728553  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.60082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:20.828754  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.857979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:20.928465  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.591188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:21.028841  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.839848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:21.130659  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (3.677111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:21.228960  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.924804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:21.329359  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.291687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:21.432306  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.963264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:21.530035  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.884341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:21.628502  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.585827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:21.716859  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:21.717459  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:21.719684  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:21.720870  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:21.720900  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:21.725291  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:21.728611  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.693545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:21.829510  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.492621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:21.928931  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.061158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:22.028425  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.519185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:22.128343  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.455232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:22.228631  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.628544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:22.328548  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.632913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:22.428673  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.687507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:22.529102  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.185938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:22.628919  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.997899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:22.717104  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:22.717578  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:22.719828  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:22.721010  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:22.721052  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:22.725846  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:22.730785  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.141703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:22.828657  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.69756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:22.934300  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.287499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:23.028818  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.885671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:23.132564  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (3.348088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:23.230571  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (3.600446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:23.329701  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.818052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:23.428923  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.903756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:23.528603  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.694399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:23.628398  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.496492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:23.717286  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:23.717707  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:23.720036  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:23.721196  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:23.721227  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:23.725982  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:23.730078  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.457747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:23.828914  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.025866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:23.928915  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.977699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:24.030282  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (3.406461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:24.137863  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (10.964239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:24.229351  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.819305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:24.333861  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (6.932476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:24.429410  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.453717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:24.528993  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.006594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:24.650220  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (23.298975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:24.717472  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:24.717910  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:24.720219  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:24.721831  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:24.721893  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:24.726132  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:24.730855  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (3.948854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:24.828970  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.072061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:24.929587  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.657657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:25.028667  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.754449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:25.128372  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.548705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:25.228533  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.601224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:25.328833  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.828546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:25.428403  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.46952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:25.524935  108684 httplog.go:90] GET /api/v1/namespaces/default: (1.781483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:25.526563  108684 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.252829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:25.528244  108684 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.275478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:25.529612  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.554973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:25.628794  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.743052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:25.717634  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:25.718057  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:25.720509  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:25.721978  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:25.722017  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:25.726327  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:25.728494  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.61053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
E0917 06:43:25.761525  108684 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:37625/apis/events.k8s.io/v1beta1/namespaces/permit-plugin999ff825-6506-4fd3-a897-7f161efaac4a/events: dial tcp 127.0.0.1:37625: connect: connection refused' (may retry after sleeping)
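The E-level line above is leftover traffic from an earlier test fixture: its event broadcaster is still trying to POST to the apiserver that served the permit-plugin namespace on 127.0.0.1:37625, which has already been torn down, so the write fails with connection refused and is retried after a sleep. A generic sketch of that retry-after-sleep pattern follows; writeWithRetry and its parameters are illustrative stand-ins, not the actual event_broadcaster.go code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// writeWithRetry retries a failing write a bounded number of times, sleeping
// between attempts ("may retry after sleeping"); post is any write function.
func writeWithRetry(post func() error, attempts int, backoff time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = post(); err == nil {
			return nil
		}
		time.Sleep(backoff)
	}
	return err
}

func main() {
	err := writeWithRetry(func() error {
		return errors.New("dial tcp 127.0.0.1:37625: connect: connection refused")
	}, 3, 10*time.Millisecond)
	fmt.Println(err) // still failing after the retries, as in the log line above
}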
I0917 06:43:25.828583  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.680124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:25.928623  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.645866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:26.028392  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.426619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:26.129056  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.061475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:26.228370  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.451812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:26.329494  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.657798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:26.428777  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.500331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:26.528630  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.651684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:26.628341  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.408746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:26.717840  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:26.718180  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:26.720707  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:26.722272  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:26.722334  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:26.726442  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:26.728466  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.623294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:26.828909  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.99348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:26.928710  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.691745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:27.029505  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.479809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:27.128582  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.680704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:27.228592  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.678838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:27.328475  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.557947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:27.428529  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.568975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:27.529067  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.143339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:27.628465  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.521256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:27.718050  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:27.718288  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:27.720855  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:27.722463  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:27.722490  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:27.726529  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:27.728569  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.586281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:27.828571  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.657056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:27.928681  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.754133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:28.028971  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.946216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:28.128589  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.509084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:28.228744  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.599181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:28.334381  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (7.423988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:28.428216  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.264389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:28.528426  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.544123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:28.628442  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.536912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:28.718253  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:28.718570  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:28.721055  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:28.722632  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:28.722645  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:28.726851  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:28.728831  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.903664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:28.828644  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.690754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:28.928507  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.531367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:29.028730  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.777889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:29.128559  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.53036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:29.228648  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.605887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:29.328412  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.513069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:29.430178  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.919037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:29.528520  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.542476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:29.630578  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (3.60514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:29.718438  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:29.718685  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:29.721216  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:29.722829  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:29.722895  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:29.726997  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:29.728546  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.627564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:29.828603  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.669025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:29.928748  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.791606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:30.028650  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.732094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:30.129965  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (3.126048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:30.229563  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.660289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:30.337553  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.587948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:30.429368  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.128683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:30.528509  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.584469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:30.628453  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.572713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:30.718633  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:30.718832  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:30.721456  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:30.722904  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:30.723057  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:30.727437  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:30.729110  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.246455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:30.828290  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.355746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:30.928525  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.515969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:31.028777  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.774356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:31.128577  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.607919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:31.228527  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.60202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:31.328470  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.572922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:31.428863  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.990999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:31.529074  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.773328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:31.628855  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.623713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:31.718856  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:31.718945  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:31.721645  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:31.723063  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:31.723325  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:31.728124  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:31.728723  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.85684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:31.831050  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.160227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:31.928650  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.655341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:32.028745  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.758059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:32.128818  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.897104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:32.229054  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.101883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:32.328465  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.547375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
E0917 06:43:32.423044  108684 factory.go:590] Error getting pod permit-plugin999ff825-6506-4fd3-a897-7f161efaac4a/test-pod for retry: Get http://127.0.0.1:37625/api/v1/namespaces/permit-plugin999ff825-6506-4fd3-a897-7f161efaac4a/pods/test-pod: dial tcp 127.0.0.1:37625: connect: connection refused; retrying...
I0917 06:43:32.428836  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.936647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:32.529393  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.501275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:32.628682  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.697972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:32.719061  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:32.719068  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:32.721848  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:32.723246  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:32.723463  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:32.728174  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.314896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:32.728281  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:32.829323  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.796265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:32.931468  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.788926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:33.030379  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (3.315791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:33.128930  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.559988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:33.228511  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.574615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:33.328355  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.465394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:33.429352  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.446347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:33.528453  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.538591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:33.628902  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.060405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:33.719291  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:33.719333  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:33.722159  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:33.723508  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:33.723672  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:33.728709  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:33.729369  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.912952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:33.829309  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.01725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:33.928637  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.70742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:34.028695  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.268491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:34.129381  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.01791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:34.228977  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.530118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:34.329293  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.333914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:34.428529  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.5965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:34.528410  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.465658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:34.628606  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.644834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:34.719490  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:34.719637  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:34.722371  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:34.723666  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:34.723960  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:34.729019  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:34.730082  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (3.11494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:34.828887  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.364011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:34.928344  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.443586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:35.028430  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.547496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:35.128846  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.878727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:35.229439  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.528708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:35.330240  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (3.369148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:35.428449  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.521645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:35.524982  108684 httplog.go:90] GET /api/v1/namespaces/default: (1.769918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:35.526587  108684 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.198789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:35.531748  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (2.932366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59896]
I0917 06:43:35.539431  108684 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (9.89313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:35.628886  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.964687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:35.720851  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:35.720884  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:35.722551  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:35.723816  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:35.724536  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:35.728823  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.961025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:35.729116  108684 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0917 06:43:35.831278  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (4.024496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:35.834065  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.340304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:35.841402  108684 httplog.go:90] DELETE /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (6.545295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:35.844058  108684 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure2633e594-d110-45df-8d5a-4fb97bc18a0f/pods/pidpressure-fake-name: (1.050489ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
E0917 06:43:35.845056  108684 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0917 06:43:35.845433  108684 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30144&timeout=6m16s&timeoutSeconds=376&watch=true: (30.132134585s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59770]
I0917 06:43:35.845562  108684 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30144&timeout=5m18s&timeoutSeconds=318&watch=true: (30.130391996s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59884]
I0917 06:43:35.845695  108684 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=30573&timeout=8m48s&timeoutSeconds=528&watch=true: (30.129597527s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59888]
I0917 06:43:35.845813  108684 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=30144&timeout=5m12s&timeoutSeconds=312&watch=true: (30.129833562s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59892]
I0917 06:43:35.845911  108684 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=30144&timeout=9m40s&timeoutSeconds=580&watch=true: (30.129199859s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59890]
I0917 06:43:35.846006  108684 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30144&timeout=7m33s&timeoutSeconds=453&watch=true: (30.129254118s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59894]
I0917 06:43:35.846116  108684 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=30144&timeout=5m0s&timeoutSeconds=300&watch=true: (30.129179299s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59882]
I0917 06:43:35.846216  108684 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30144&timeout=7m11s&timeoutSeconds=431&watch=true: (30.128465918s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59878]
I0917 06:43:35.846320  108684 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30144&timeout=7m10s&timeoutSeconds=430&watch=true: (30.127908525s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59876]
I0917 06:43:35.846420  108684 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30144&timeout=5m29s&timeoutSeconds=329&watch=true: (30.132451592s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59886]
I0917 06:43:35.846533  108684 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=30144&timeoutSeconds=328&watch=true: (30.235386745s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59828]
I0917 06:43:35.852337  108684 httplog.go:90] DELETE /api/v1/nodes: (5.709788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:35.852615  108684 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0917 06:43:35.856223  108684 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.359294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
I0917 06:43:35.861375  108684 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (4.719698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60198]
--- FAIL: TestNodePIDPressure (33.91s)
    predicates_test.go:924: Test Failed: error, timed out waiting for the condition, while waiting for scheduled

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190917-063533.xml
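The "timed out waiting for the condition" text is the message of wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait, returned when a poll loop's condition is still false at its deadline; the steady stream of GET requests for pods/pidpressure-fake-name above is that poll loop checking whether the scheduler has bound the pod yet. The sketch below only illustrates that pattern and is not the actual code at predicates_test.go:924 — the clientset cs, the helper name, and the 30s/100ms numbers are assumptions; the Get signature is the context-free form client-go used in this era.

	package scheduler // any package name works for this sketch

	import (
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodScheduled polls until the pod's spec.nodeName is set, i.e. the
	// scheduler has bound it to a node. If that never happens before the timeout,
	// wait.Poll returns wait.ErrWaitTimeout, whose message is exactly
	// "timed out waiting for the condition", matching the failure above.
	func waitForPodScheduled(cs kubernetes.Interface, ns, name string) error {
		return wait.Poll(100*time.Millisecond, 30*time.Second, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return pod.Spec.NodeName != "", nil
		})
	}

Read against the log, the failure means the pod never acquired a nodeName within the poll window even though the apiserver kept serving it (the 200 responses), so the wait gave up and the test reported the timeout.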



2862 Passed Tests

4 Skipped Tests

Error lines from build-log.txt

... skipping 913 lines ...
W0917 06:30:29.164] I0917 06:30:28.905492   52779 shared_informer.go:197] Waiting for caches to sync for daemon sets
W0917 06:30:29.164] I0917 06:30:28.905873   52779 controllermanager.go:534] Started "disruption"
W0917 06:30:29.164] I0917 06:30:28.905884   52779 disruption.go:333] Starting disruption controller
W0917 06:30:29.164] I0917 06:30:28.905905   52779 shared_informer.go:197] Waiting for caches to sync for disruption
W0917 06:30:29.164] I0917 06:30:28.906654   52779 controllermanager.go:534] Started "cronjob"
W0917 06:30:29.165] I0917 06:30:28.906708   52779 cronjob_controller.go:96] Starting CronJob Manager
W0917 06:30:29.165] E0917 06:30:28.907115   52779 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0917 06:30:29.165] W0917 06:30:28.907137   52779 controllermanager.go:526] Skipping "service"
W0917 06:30:29.165] I0917 06:30:28.907145   52779 core.go:211] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0917 06:30:29.166] W0917 06:30:28.907150   52779 controllermanager.go:526] Skipping "route"
W0917 06:30:29.166] W0917 06:30:28.907530   52779 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W0917 06:30:29.166] I0917 06:30:28.908695   52779 controllermanager.go:534] Started "attachdetach"
W0917 06:30:29.167] I0917 06:30:28.908836   52779 attach_detach_controller.go:334] Starting attach detach controller
... skipping 8 lines ...
W0917 06:30:29.168] I0917 06:30:28.916005   52779 taint_manager.go:162] Sending events to api server.
W0917 06:30:29.169] I0917 06:30:28.914903   52779 cleaner.go:81] Starting CSR cleaner controller
W0917 06:30:29.169] I0917 06:30:28.916074   52779 node_lifecycle_controller.go:458] Controller will reconcile labels.
W0917 06:30:29.169] I0917 06:30:28.916134   52779 node_lifecycle_controller.go:471] Controller will taint node by condition.
W0917 06:30:29.169] I0917 06:30:28.916167   52779 controllermanager.go:534] Started "nodelifecycle"
W0917 06:30:29.169] I0917 06:30:28.916499   52779 node_lifecycle_controller.go:77] Sending events to api server
W0917 06:30:29.170] E0917 06:30:28.916594   52779 core.go:201] failed to start cloud node lifecycle controller: no cloud provider provided
W0917 06:30:29.170] W0917 06:30:28.916621   52779 controllermanager.go:526] Skipping "cloud-node-lifecycle"
W0917 06:30:29.170] I0917 06:30:28.917021   52779 node_lifecycle_controller.go:495] Starting node controller
W0917 06:30:29.170] I0917 06:30:28.917058   52779 shared_informer.go:197] Waiting for caches to sync for taint
W0917 06:30:29.171] I0917 06:30:28.927527   52779 controllermanager.go:534] Started "namespace"
W0917 06:30:29.171] I0917 06:30:28.927981   52779 namespace_controller.go:186] Starting namespace controller
W0917 06:30:29.171] I0917 06:30:28.928174   52779 shared_informer.go:197] Waiting for caches to sync for namespace
... skipping 2 lines ...
W0917 06:30:29.171] I0917 06:30:28.928275   52779 deployment_controller.go:152] Starting deployment controller
W0917 06:30:29.172] I0917 06:30:28.928511   52779 shared_informer.go:197] Waiting for caches to sync for deployment
W0917 06:30:29.172] I0917 06:30:28.929557   52779 controllermanager.go:534] Started "persistentvolume-binder"
W0917 06:30:29.172] W0917 06:30:28.929581   52779 controllermanager.go:526] Skipping "ttl-after-finished"
W0917 06:30:29.172] I0917 06:30:28.942325   52779 pv_controller_base.go:282] Starting persistent volume controller
W0917 06:30:29.172] I0917 06:30:28.942418   52779 shared_informer.go:197] Waiting for caches to sync for persistent volume
W0917 06:30:29.173] W0917 06:30:28.956188   52779 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0917 06:30:29.173] I0917 06:30:28.985997   52779 shared_informer.go:204] Caches are synced for job 
W0917 06:30:29.173] I0917 06:30:28.989025   52779 shared_informer.go:204] Caches are synced for TTL 
W0917 06:30:29.173] I0917 06:30:28.990707   52779 shared_informer.go:204] Caches are synced for ReplicationController 
W0917 06:30:29.174] I0917 06:30:29.001670   52779 shared_informer.go:204] Caches are synced for GC 
W0917 06:30:29.174] I0917 06:30:29.002486   52779 shared_informer.go:204] Caches are synced for ReplicaSet 
W0917 06:30:29.174] I0917 06:30:29.004465   52779 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
W0917 06:30:29.174] I0917 06:30:29.004830   52779 shared_informer.go:204] Caches are synced for certificate-csrapproving 
W0917 06:30:29.175] I0917 06:30:29.015711   52779 shared_informer.go:204] Caches are synced for PVC protection 
W0917 06:30:29.175] E0917 06:30:29.016339   52779 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0917 06:30:29.175] E0917 06:30:29.016976   52779 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0917 06:30:29.176] I0917 06:30:29.086343   52779 shared_informer.go:204] Caches are synced for service account 
W0917 06:30:29.176] I0917 06:30:29.088409   49234 controller.go:606] quota admission added evaluator for: serviceaccounts
W0917 06:30:29.176] I0917 06:30:29.105393   52779 shared_informer.go:204] Caches are synced for PV protection 
W0917 06:30:29.176] I0917 06:30:29.109103   52779 shared_informer.go:204] Caches are synced for attach detach 
W0917 06:30:29.176] I0917 06:30:29.128512   52779 shared_informer.go:204] Caches are synced for namespace 
I0917 06:30:29.277] Successful: the flag '--client' shows correct client info
... skipping 75 lines ...
I0917 06:30:32.480] +++ working dir: /go/src/k8s.io/kubernetes
I0917 06:30:32.483] +++ command: run_RESTMapper_evaluation_tests
I0917 06:30:32.493] +++ [0917 06:30:32] Creating namespace namespace-1568701832-30661
I0917 06:30:32.574] namespace/namespace-1568701832-30661 created
I0917 06:30:32.650] Context "test" modified.
I0917 06:30:32.656] +++ [0917 06:30:32] Testing RESTMapper
I0917 06:30:32.773] +++ [0917 06:30:32] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0917 06:30:32.789] +++ exit code: 0
I0917 06:30:32.932] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0917 06:30:32.932] bindings                                                                      true         Binding
I0917 06:30:32.932] componentstatuses                 cs                                          false        ComponentStatus
I0917 06:30:32.933] configmaps                        cm                                          true         ConfigMap
I0917 06:30:32.933] endpoints                         ep                                          true         Endpoints
... skipping 613 lines ...
I0917 06:30:52.145] core.sh:241: Successful get pdb/test-pdb-1 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 2
I0917 06:30:52.223] poddisruptionbudget.policy/test-pdb-2 created
I0917 06:30:52.321] core.sh:245: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
I0917 06:30:52.406] poddisruptionbudget.policy/test-pdb-3 created
I0917 06:30:52.497] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0917 06:30:52.573] poddisruptionbudget.policy/test-pdb-4 created
W0917 06:30:52.675] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0917 06:30:52.675] error: setting 'all' parameter but found a non empty selector. 
W0917 06:30:52.676] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0917 06:30:52.676] I0917 06:30:52.047295   49234 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0917 06:30:52.750] error: min-available and max-unavailable cannot be both specified
I0917 06:30:52.851] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0917 06:30:52.857] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:30:53.063] pod/env-test-pod created
I0917 06:30:53.264] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0917 06:30:53.264] Name:         env-test-pod
I0917 06:30:53.264] Namespace:    test-kubectl-describe-pod
... skipping 177 lines ...
I0917 06:31:07.134] pod/valid-pod patched
I0917 06:31:07.232] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0917 06:31:07.318] pod/valid-pod patched
I0917 06:31:07.413] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0917 06:31:07.581] pod/valid-pod patched
I0917 06:31:07.681] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0917 06:31:07.888] +++ [0917 06:31:07] "kubectl patch with resourceVersion 499" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0917 06:31:08.134] pod "valid-pod" deleted
I0917 06:31:08.147] pod/valid-pod replaced
I0917 06:31:08.248] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0917 06:31:08.431] Successful
I0917 06:31:08.432] message:error: --grace-period must have --force specified
I0917 06:31:08.432] has:\-\-grace-period must have \-\-force specified
I0917 06:31:08.594] Successful
I0917 06:31:08.595] message:error: --timeout must have --force specified
I0917 06:31:08.595] has:\-\-timeout must have \-\-force specified
I0917 06:31:08.774] node/node-v1-test created
W0917 06:31:08.875] W0917 06:31:08.774323   52779 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0917 06:31:08.976] node/node-v1-test replaced
I0917 06:31:09.039] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0917 06:31:09.117] node "node-v1-test" deleted
I0917 06:31:09.214] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0917 06:31:09.485] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0917 06:31:10.493] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 32 lines ...
I0917 06:31:11.569] namespace/namespace-1568701871-31227 created
I0917 06:31:11.654] Context "test" modified.
W0917 06:31:11.754] Edit cancelled, no changes made.
W0917 06:31:11.755] Edit cancelled, no changes made.
W0917 06:31:11.755] Edit cancelled, no changes made.
W0917 06:31:11.755] Edit cancelled, no changes made.
W0917 06:31:11.755] error: 'name' already has a value (valid-pod), and --overwrite is false
W0917 06:31:11.755] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0917 06:31:11.856] core.sh:610: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:31:11.947] pod/redis-master created
I0917 06:31:11.950] pod/valid-pod created
I0917 06:31:12.073] core.sh:614: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0917 06:31:12.164] core.sh:618: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
... skipping 75 lines ...
I0917 06:31:18.685] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0917 06:31:18.687] +++ working dir: /go/src/k8s.io/kubernetes
I0917 06:31:18.690] +++ command: run_kubectl_create_error_tests
I0917 06:31:18.702] +++ [0917 06:31:18] Creating namespace namespace-1568701878-21214
I0917 06:31:18.777] namespace/namespace-1568701878-21214 created
I0917 06:31:18.857] Context "test" modified.
I0917 06:31:18.863] +++ [0917 06:31:18] Testing kubectl create with error
W0917 06:31:18.964] Error: must specify one of -f and -k
W0917 06:31:18.964] 
W0917 06:31:18.964] Create a resource from a file or from stdin.
W0917 06:31:18.964] 
W0917 06:31:18.965]  JSON and YAML formats are accepted.
W0917 06:31:18.965] 
W0917 06:31:18.965] Examples:
... skipping 41 lines ...
W0917 06:31:18.970] 
W0917 06:31:18.970] Usage:
W0917 06:31:18.970]   kubectl create -f FILENAME [options]
W0917 06:31:18.970] 
W0917 06:31:18.970] Use "kubectl <command> --help" for more information about a given command.
W0917 06:31:18.970] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0917 06:31:19.108] +++ [0917 06:31:19] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0917 06:31:19.208] kubectl convert is DEPRECATED and will be removed in a future version.
W0917 06:31:19.209] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0917 06:31:19.309] +++ exit code: 0
I0917 06:31:19.311] Recording: run_kubectl_apply_tests
I0917 06:31:19.311] Running command: run_kubectl_apply_tests
I0917 06:31:19.333] 
... skipping 16 lines ...
I0917 06:31:20.893] apply.sh:276: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
I0917 06:31:20.968] (Bpod "test-pod" deleted
I0917 06:31:21.180] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W0917 06:31:21.510] I0917 06:31:21.509553   49234 client.go:361] parsed scheme: "endpoint"
W0917 06:31:21.510] I0917 06:31:21.509610   49234 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0917 06:31:21.513] I0917 06:31:21.513430   49234 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0917 06:31:21.605] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0917 06:31:21.706] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0917 06:31:21.706] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0917 06:31:21.723] +++ exit code: 0
I0917 06:31:21.762] Recording: run_kubectl_run_tests
I0917 06:31:21.762] Running command: run_kubectl_run_tests
I0917 06:31:21.786] 
... skipping 97 lines ...
I0917 06:31:24.441] Context "test" modified.
I0917 06:31:24.447] +++ [0917 06:31:24] Testing kubectl create filter
I0917 06:31:24.533] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:31:24.724] pod/selector-test-pod created
I0917 06:31:24.818] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0917 06:31:24.897] Successful
I0917 06:31:24.898] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0917 06:31:24.898] has:pods "selector-test-pod-dont-apply" not found
I0917 06:31:24.972] pod "selector-test-pod" deleted
I0917 06:31:24.992] +++ exit code: 0
I0917 06:31:25.023] Recording: run_kubectl_apply_deployments_tests
I0917 06:31:25.024] Running command: run_kubectl_apply_deployments_tests
I0917 06:31:25.045] 
... skipping 25 lines ...
I0917 06:31:26.682] apps.sh:139: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:31:26.758] apps.sh:140: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:31:26.844] apps.sh:144: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:31:27.005] deployment.apps/nginx created
I0917 06:31:27.102] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0917 06:31:31.295] Successful
I0917 06:31:31.295] message:Error from server (Conflict): error when applying patch:
I0917 06:31:31.296] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568701885-9367\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0917 06:31:31.297] to:
I0917 06:31:31.297] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0917 06:31:31.297] Name: "nginx", Namespace: "namespace-1568701885-9367"
I0917 06:31:31.299] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568701885-9367\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-09-17T06:31:27Z" "generation":'\x01' "labels":map["name":"nginx"] "name":"nginx" "namespace":"namespace-1568701885-9367" "resourceVersion":"592" "selfLink":"/apis/apps/v1/namespaces/namespace-1568701885-9367/deployments/nginx" "uid":"f6e60ed0-6cd8-4810-82f4-ac44c87d8857"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-09-17T06:31:27Z" "lastUpdateTime":"2019-09-17T06:31:27Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-09-17T06:31:27Z" "lastUpdateTime":"2019-09-17T06:31:27Z" "message":"ReplicaSet \"nginx-8484dd655\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0917 06:31:31.300] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0917 06:31:31.300] has:Error from server (Conflict)
W0917 06:31:31.400] I0917 06:31:27.008413   52779 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568701885-9367", Name:"nginx", UID:"f6e60ed0-6cd8-4810-82f4-ac44c87d8857", APIVersion:"apps/v1", ResourceVersion:"579", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
W0917 06:31:31.401] I0917 06:31:27.010476   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568701885-9367", Name:"nginx-8484dd655", UID:"0820eebb-7a16-4b43-8349-db5bd31291bf", APIVersion:"apps/v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-4s25k
W0917 06:31:31.401] I0917 06:31:27.013189   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568701885-9367", Name:"nginx-8484dd655", UID:"0820eebb-7a16-4b43-8349-db5bd31291bf", APIVersion:"apps/v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-bw4v5
W0917 06:31:31.402] I0917 06:31:27.014134   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568701885-9367", Name:"nginx-8484dd655", UID:"0820eebb-7a16-4b43-8349-db5bd31291bf", APIVersion:"apps/v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-xwg2z
W0917 06:31:33.092] I0917 06:31:33.091746   52779 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1568701876-5723
I0917 06:31:36.545] deployment.apps/nginx configured
... skipping 146 lines ...
I0917 06:31:43.860] +++ [0917 06:31:43] Creating namespace namespace-1568701903-3465
I0917 06:31:43.934] namespace/namespace-1568701903-3465 created
I0917 06:31:44.004] Context "test" modified.
I0917 06:31:44.011] +++ [0917 06:31:44] Testing kubectl get
I0917 06:31:44.100] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:31:44.182] Successful
I0917 06:31:44.183] message:Error from server (NotFound): pods "abc" not found
I0917 06:31:44.183] has:pods "abc" not found
I0917 06:31:44.270] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:31:44.353] Successful
I0917 06:31:44.354] message:Error from server (NotFound): pods "abc" not found
I0917 06:31:44.354] has:pods "abc" not found
I0917 06:31:44.439] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:31:44.520] Successful
I0917 06:31:44.521] message:{
I0917 06:31:44.521]     "apiVersion": "v1",
I0917 06:31:44.521]     "items": [],
... skipping 23 lines ...
I0917 06:31:44.850] has not:No resources found
I0917 06:31:44.933] Successful
I0917 06:31:44.933] message:NAME
I0917 06:31:44.933] has not:No resources found
I0917 06:31:45.019] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:31:45.118] Successful
I0917 06:31:45.118] message:error: the server doesn't have a resource type "foobar"
I0917 06:31:45.118] has not:No resources found
I0917 06:31:45.201] Successful
I0917 06:31:45.201] message:No resources found in namespace-1568701903-3465 namespace.
I0917 06:31:45.201] has:No resources found
I0917 06:31:45.282] Successful
I0917 06:31:45.283] message:
I0917 06:31:45.283] has not:No resources found
I0917 06:31:45.366] Successful
I0917 06:31:45.366] message:No resources found in namespace-1568701903-3465 namespace.
I0917 06:31:45.366] has:No resources found
I0917 06:31:45.452] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:31:45.537] Successful
I0917 06:31:45.537] message:Error from server (NotFound): pods "abc" not found
I0917 06:31:45.537] has:pods "abc" not found
I0917 06:31:45.539] FAIL!
I0917 06:31:45.539] message:Error from server (NotFound): pods "abc" not found
I0917 06:31:45.539] has not:List
I0917 06:31:45.540] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0917 06:31:45.650] Successful
I0917 06:31:45.650] message:I0917 06:31:45.600730   62729 loader.go:375] Config loaded from file:  /tmp/tmp.HI2nkjbPt3/.kube/config
I0917 06:31:45.651] I0917 06:31:45.602225   62729 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0917 06:31:45.651] I0917 06:31:45.623066   62729 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
... skipping 660 lines ...
I0917 06:31:51.206] Successful
I0917 06:31:51.207] message:NAME    DATA   AGE
I0917 06:31:51.207] one     0      1s
I0917 06:31:51.208] three   0      0s
I0917 06:31:51.208] two     0      0s
I0917 06:31:51.208] STATUS    REASON          MESSAGE
I0917 06:31:51.209] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0917 06:31:51.209] has not:watch is only supported on individual resources
I0917 06:31:52.297] Successful
I0917 06:31:52.297] message:STATUS    REASON          MESSAGE
I0917 06:31:52.297] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0917 06:31:52.297] has not:watch is only supported on individual resources
I0917 06:31:52.303] +++ [0917 06:31:52] Creating namespace namespace-1568701912-23791
I0917 06:31:52.385] namespace/namespace-1568701912-23791 created
I0917 06:31:52.459] Context "test" modified.
I0917 06:31:52.553] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:31:52.725] pod/valid-pod created
... skipping 56 lines ...
I0917 06:31:52.830] }
I0917 06:31:52.922] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0917 06:31:53.175] <no value>Successful
I0917 06:31:53.175] message:valid-pod:
I0917 06:31:53.175] has:valid-pod:
I0917 06:31:53.268] Successful
I0917 06:31:53.268] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0917 06:31:53.268] 	template was:
I0917 06:31:53.268] 		{.missing}
I0917 06:31:53.269] 	object given to jsonpath engine was:
I0917 06:31:53.269] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-09-17T06:31:52Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1568701912-23791", "resourceVersion":"694", "selfLink":"/api/v1/namespaces/namespace-1568701912-23791/pods/valid-pod", "uid":"f831dd84-9fa3-429e-9dbd-cdff0bd9a2f8"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0917 06:31:53.270] has:missing is not found
I0917 06:31:53.356] Successful
I0917 06:31:53.356] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0917 06:31:53.356] 	template was:
I0917 06:31:53.356] 		{{.missing}}
I0917 06:31:53.356] 	raw data was:
I0917 06:31:53.357] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-09-17T06:31:52Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1568701912-23791","resourceVersion":"694","selfLink":"/api/v1/namespaces/namespace-1568701912-23791/pods/valid-pod","uid":"f831dd84-9fa3-429e-9dbd-cdff0bd9a2f8"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0917 06:31:53.357] 	object given to template engine was:
I0917 06:31:53.358] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-09-17T06:31:52Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1568701912-23791 resourceVersion:694 selfLink:/api/v1/namespaces/namespace-1568701912-23791/pods/valid-pod uid:f831dd84-9fa3-429e-9dbd-cdff0bd9a2f8] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0917 06:31:53.358] has:map has no entry for key "missing"
W0917 06:31:53.458] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0917 06:31:54.442] Successful
I0917 06:31:54.442] message:NAME        READY   STATUS    RESTARTS   AGE
I0917 06:31:54.442] valid-pod   0/1     Pending   0          1s
I0917 06:31:54.443] STATUS      REASON          MESSAGE
I0917 06:31:54.443] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0917 06:31:54.443] has:STATUS
I0917 06:31:54.444] Successful
I0917 06:31:54.444] message:NAME        READY   STATUS    RESTARTS   AGE
I0917 06:31:54.444] valid-pod   0/1     Pending   0          1s
I0917 06:31:54.445] STATUS      REASON          MESSAGE
I0917 06:31:54.445] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0917 06:31:54.445] has:valid-pod
I0917 06:31:55.525] Successful
I0917 06:31:55.525] message:pod/valid-pod
I0917 06:31:55.525] has not:STATUS
I0917 06:31:55.526] Successful
I0917 06:31:55.526] message:pod/valid-pod
... skipping 72 lines ...
I0917 06:31:56.622] status:
I0917 06:31:56.622]   phase: Pending
I0917 06:31:56.622]   qosClass: Guaranteed
I0917 06:31:56.622] ---
I0917 06:31:56.623] has:name: valid-pod
I0917 06:31:56.687] Successful
I0917 06:31:56.688] message:Error from server (NotFound): pods "invalid-pod" not found
I0917 06:31:56.688] has:"invalid-pod" not found
I0917 06:31:56.764] pod "valid-pod" deleted
I0917 06:31:56.854] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:31:57.005] pod/redis-master created
I0917 06:31:57.008] pod/valid-pod created
I0917 06:31:57.096] Successful
... skipping 35 lines ...
I0917 06:31:58.138] +++ command: run_kubectl_exec_pod_tests
I0917 06:31:58.149] +++ [0917 06:31:58] Creating namespace namespace-1568701918-28079
I0917 06:31:58.220] namespace/namespace-1568701918-28079 created
I0917 06:31:58.287] Context "test" modified.
I0917 06:31:58.293] +++ [0917 06:31:58] Testing kubectl exec POD COMMAND
I0917 06:31:58.374] Successful
I0917 06:31:58.374] message:Error from server (NotFound): pods "abc" not found
I0917 06:31:58.375] has:pods "abc" not found
I0917 06:31:58.520] pod/test-pod created
I0917 06:31:58.613] Successful
I0917 06:31:58.614] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0917 06:31:58.614] has not:pods "test-pod" not found
I0917 06:31:58.616] Successful
I0917 06:31:58.616] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0917 06:31:58.616] has not:pod or type/name must be specified
I0917 06:31:58.691] pod "test-pod" deleted
I0917 06:31:58.710] +++ exit code: 0
I0917 06:31:58.742] Recording: run_kubectl_exec_resource_name_tests
I0917 06:31:58.743] Running command: run_kubectl_exec_resource_name_tests
I0917 06:31:58.765] 
... skipping 2 lines ...
I0917 06:31:58.773] +++ command: run_kubectl_exec_resource_name_tests
I0917 06:31:58.784] +++ [0917 06:31:58] Creating namespace namespace-1568701918-2274
I0917 06:31:58.856] namespace/namespace-1568701918-2274 created
I0917 06:31:58.925] Context "test" modified.
I0917 06:31:58.931] +++ [0917 06:31:58] Testing kubectl exec TYPE/NAME COMMAND
I0917 06:31:59.027] Successful
I0917 06:31:59.027] message:error: the server doesn't have a resource type "foo"
I0917 06:31:59.028] has:error:
I0917 06:31:59.107] Successful
I0917 06:31:59.108] message:Error from server (NotFound): deployments.apps "bar" not found
I0917 06:31:59.108] has:"bar" not found
I0917 06:31:59.253] pod/test-pod created
I0917 06:31:59.405] replicaset.apps/frontend created
W0917 06:31:59.506] I0917 06:31:59.408390   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568701918-2274", Name:"frontend", UID:"109b7f66-69f4-4a39-ae16-8a56dbf0b59f", APIVersion:"apps/v1", ResourceVersion:"747", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qckjk
W0917 06:31:59.507] I0917 06:31:59.411260   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568701918-2274", Name:"frontend", UID:"109b7f66-69f4-4a39-ae16-8a56dbf0b59f", APIVersion:"apps/v1", ResourceVersion:"747", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-r7v5h
W0917 06:31:59.508] I0917 06:31:59.411407   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568701918-2274", Name:"frontend", UID:"109b7f66-69f4-4a39-ae16-8a56dbf0b59f", APIVersion:"apps/v1", ResourceVersion:"747", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-cl56h
I0917 06:31:59.608] configmap/test-set-env-config created
I0917 06:31:59.656] Successful
I0917 06:31:59.656] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0917 06:31:59.657] has:not implemented
I0917 06:31:59.746] Successful
I0917 06:31:59.746] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0917 06:31:59.746] has not:not found
I0917 06:31:59.748] Successful
I0917 06:31:59.748] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0917 06:31:59.748] has not:pod or type/name must be specified
I0917 06:31:59.845] Successful
I0917 06:31:59.846] message:Error from server (BadRequest): pod frontend-cl56h does not have a host assigned
I0917 06:31:59.846] has not:not found
I0917 06:31:59.847] Successful
I0917 06:31:59.848] message:Error from server (BadRequest): pod frontend-cl56h does not have a host assigned
I0917 06:31:59.848] has not:pod or type/name must be specified
I0917 06:31:59.924] pod "test-pod" deleted
I0917 06:32:00.004] replicaset.apps "frontend" deleted
I0917 06:32:00.083] configmap "test-set-env-config" deleted
I0917 06:32:00.101] +++ exit code: 0
I0917 06:32:00.136] Recording: run_create_secret_tests
I0917 06:32:00.137] Running command: run_create_secret_tests
I0917 06:32:00.158] 
I0917 06:32:00.160] +++ Running case: test-cmd.run_create_secret_tests 
I0917 06:32:00.163] +++ working dir: /go/src/k8s.io/kubernetes
I0917 06:32:00.165] +++ command: run_create_secret_tests
I0917 06:32:00.263] Successful
I0917 06:32:00.263] message:Error from server (NotFound): secrets "mysecret" not found
I0917 06:32:00.263] has:secrets "mysecret" not found
I0917 06:32:00.419] Successful
I0917 06:32:00.419] message:Error from server (NotFound): secrets "mysecret" not found
I0917 06:32:00.420] has:secrets "mysecret" not found
I0917 06:32:00.421] Successful
I0917 06:32:00.421] message:user-specified
I0917 06:32:00.421] has:user-specified
I0917 06:32:00.493] Successful
I0917 06:32:00.568] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"d47c2347-b2ec-4c8f-98dd-8917c47b6cce","resourceVersion":"768","creationTimestamp":"2019-09-17T06:32:00Z"}}
... skipping 2 lines ...
I0917 06:32:00.729] has:uid
I0917 06:32:00.805] Successful
I0917 06:32:00.806] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"d47c2347-b2ec-4c8f-98dd-8917c47b6cce","resourceVersion":"769","creationTimestamp":"2019-09-17T06:32:00Z"},"data":{"key1":"config1"}}
I0917 06:32:00.806] has:config1
I0917 06:32:00.874] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"d47c2347-b2ec-4c8f-98dd-8917c47b6cce"}}
I0917 06:32:00.965] Successful
I0917 06:32:00.965] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0917 06:32:00.965] has:configmaps "tester-update-cm" not found
I0917 06:32:00.977] +++ exit code: 0
I0917 06:32:01.010] Recording: run_kubectl_create_kustomization_directory_tests
I0917 06:32:01.011] Running command: run_kubectl_create_kustomization_directory_tests
I0917 06:32:01.034] 
I0917 06:32:01.037] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
I0917 06:32:03.773] valid-pod   0/1     Pending   0          0s
I0917 06:32:03.773] has:valid-pod
I0917 06:32:04.864] Successful
I0917 06:32:04.864] message:NAME        READY   STATUS    RESTARTS   AGE
I0917 06:32:04.864] valid-pod   0/1     Pending   0          0s
I0917 06:32:04.864] STATUS      REASON          MESSAGE
I0917 06:32:04.865] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0917 06:32:04.865] has:Timeout exceeded while reading body
I0917 06:32:04.949] Successful
I0917 06:32:04.949] message:NAME        READY   STATUS    RESTARTS   AGE
I0917 06:32:04.949] valid-pod   0/1     Pending   0          1s
I0917 06:32:04.949] has:valid-pod
I0917 06:32:05.020] Successful
I0917 06:32:05.021] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0917 06:32:05.021] has:Invalid timeout value
I0917 06:32:05.096] pod "valid-pod" deleted
I0917 06:32:05.115] +++ exit code: 0
I0917 06:32:05.146] Recording: run_crd_tests
I0917 06:32:05.146] Running command: run_crd_tests
I0917 06:32:05.167] 
... skipping 155 lines ...
I0917 06:32:09.630] foo.company.com/test patched
I0917 06:32:09.720] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0917 06:32:09.798] foo.company.com/test patched
I0917 06:32:09.885] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0917 06:32:09.964] foo.company.com/test patched
I0917 06:32:10.053] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0917 06:32:10.200] +++ [0917 06:32:10] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0917 06:32:10.265] {
I0917 06:32:10.265]     "apiVersion": "company.com/v1",
I0917 06:32:10.265]     "kind": "Foo",
I0917 06:32:10.266]     "metadata": {
I0917 06:32:10.266]         "annotations": {
I0917 06:32:10.266]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 193 lines ...
I0917 06:32:37.637] namespace/non-native-resources created
I0917 06:32:37.802] bar.company.com/test created
I0917 06:32:37.910] crd.sh:455: Successful get bars {{len .items}}: 1
I0917 06:32:37.990] namespace "non-native-resources" deleted
I0917 06:32:43.230] crd.sh:458: Successful get bars {{len .items}}: 0
I0917 06:32:43.409] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0917 06:32:43.510] Error from server (NotFound): namespaces "non-native-resources" not found
I0917 06:32:43.611] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0917 06:32:43.627] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0917 06:32:43.737] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0917 06:32:43.770] +++ exit code: 0
I0917 06:32:43.807] Recording: run_cmd_with_img_tests
I0917 06:32:43.807] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0917 06:32:44.167] I0917 06:32:44.155998   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568701963-12864", Name:"test1-6cdffdb5b8", UID:"3201afcb-8b39-442d-ba5e-7e15f9302e4f", APIVersion:"apps/v1", ResourceVersion:"926", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-lmcz5
I0917 06:32:44.268] Successful
I0917 06:32:44.268] message:deployment.apps/test1 created
I0917 06:32:44.268] has:deployment.apps/test1 created
I0917 06:32:44.268] deployment.apps "test1" deleted
I0917 06:32:44.331] Successful
I0917 06:32:44.332] message:error: Invalid image name "InvalidImageName": invalid reference format
I0917 06:32:44.332] has:error: Invalid image name "InvalidImageName": invalid reference format
I0917 06:32:44.345] +++ exit code: 0
I0917 06:32:44.386] +++ [0917 06:32:44] Testing recursive resources
I0917 06:32:44.392] +++ [0917 06:32:44] Creating namespace namespace-1568701964-27133
I0917 06:32:44.469] namespace/namespace-1568701964-27133 created
I0917 06:32:44.548] Context "test" modified.
I0917 06:32:44.642] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:32:44.953] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0917 06:32:44.956] Successful
I0917 06:32:44.956] message:pod/busybox0 created
I0917 06:32:44.957] pod/busybox1 created
I0917 06:32:44.957] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0917 06:32:44.957] has:error validating data: kind not set
I0917 06:32:45.053] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0917 06:32:45.239] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0917 06:32:45.242] Successful
I0917 06:32:45.243] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0917 06:32:45.243] has:Object 'Kind' is missing
I0917 06:32:45.339] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0917 06:32:45.629] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0917 06:32:45.632] Successful
I0917 06:32:45.632] message:pod/busybox0 replaced
I0917 06:32:45.632] pod/busybox1 replaced
I0917 06:32:45.632] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0917 06:32:45.632] has:error validating data: kind not set
I0917 06:32:45.729] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0917 06:32:45.829] Successful
I0917 06:32:45.829] message:Name:         busybox0
I0917 06:32:45.829] Namespace:    namespace-1568701964-27133
I0917 06:32:45.829] Priority:     0
I0917 06:32:45.829] Node:         <none>
... skipping 159 lines ...
I0917 06:32:45.844] has:Object 'Kind' is missing
I0917 06:32:45.942] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0917 06:32:46.136] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0917 06:32:46.140] Successful
I0917 06:32:46.140] message:pod/busybox0 annotated
I0917 06:32:46.140] pod/busybox1 annotated
I0917 06:32:46.141] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0917 06:32:46.141] has:Object 'Kind' is missing
I0917 06:32:46.232] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0917 06:32:46.539] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0917 06:32:46.541] Successful
I0917 06:32:46.542] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0917 06:32:46.542] pod/busybox0 configured
I0917 06:32:46.543] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0917 06:32:46.543] pod/busybox1 configured
I0917 06:32:46.543] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0917 06:32:46.543] has:error validating data: kind not set
I0917 06:32:46.627] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:32:46.778] deployment.apps/nginx created
W0917 06:32:46.879] W0917 06:32:44.417754   49234 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0917 06:32:46.880] E0917 06:32:44.419399   52779 reflector.go:275] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:46.880] W0917 06:32:44.527550   49234 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0917 06:32:46.880] E0917 06:32:44.529036   52779 reflector.go:275] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:46.881] W0917 06:32:44.635151   49234 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0917 06:32:46.881] E0917 06:32:44.636715   52779 reflector.go:275] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:46.881] W0917 06:32:44.746343   49234 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0917 06:32:46.881] E0917 06:32:44.748168   52779 reflector.go:275] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:46.882] E0917 06:32:45.420666   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:46.882] E0917 06:32:45.530130   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:46.882] E0917 06:32:45.639249   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:46.882] E0917 06:32:45.749684   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:46.883] E0917 06:32:46.422052   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:46.883] E0917 06:32:46.531530   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:46.884] E0917 06:32:46.640873   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:46.884] E0917 06:32:46.751380   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:46.885] I0917 06:32:46.782980   52779 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568701964-27133", Name:"nginx", UID:"d98366bc-e027-4b44-b546-131886f5a3b1", APIVersion:"apps/v1", ResourceVersion:"951", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W0917 06:32:46.886] I0917 06:32:46.786647   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568701964-27133", Name:"nginx-f87d999f7", UID:"9fc10954-0d5c-4966-907a-b846e12222dc", APIVersion:"apps/v1", ResourceVersion:"952", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-6jgdb
W0917 06:32:46.886] I0917 06:32:46.799184   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568701964-27133", Name:"nginx-f87d999f7", UID:"9fc10954-0d5c-4966-907a-b846e12222dc", APIVersion:"apps/v1", ResourceVersion:"952", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-qrlx4
W0917 06:32:46.887] I0917 06:32:46.799638   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568701964-27133", Name:"nginx-f87d999f7", UID:"9fc10954-0d5c-4966-907a-b846e12222dc", APIVersion:"apps/v1", ResourceVersion:"952", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-wq7xr
I0917 06:32:46.988] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0917 06:32:46.996] generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 44 lines ...
I0917 06:32:47.245] deployment.apps "nginx" deleted
I0917 06:32:47.341] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0917 06:32:47.512] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0917 06:32:47.514] Successful
I0917 06:32:47.514] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0917 06:32:47.514] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0917 06:32:47.515] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0917 06:32:47.515] has:Object 'Kind' is missing
I0917 06:32:47.604] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0917 06:32:47.689] Successful
I0917 06:32:47.690] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0917 06:32:47.690] has:busybox0:busybox1:
I0917 06:32:47.691] Successful
I0917 06:32:47.692] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0917 06:32:47.692] has:Object 'Kind' is missing
I0917 06:32:47.783] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0917 06:32:47.873] pod/busybox0 labeled
I0917 06:32:47.873] pod/busybox1 labeled
I0917 06:32:47.874] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0917 06:32:47.963] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0917 06:32:47.966] Successful
I0917 06:32:47.966] message:pod/busybox0 labeled
I0917 06:32:47.966] pod/busybox1 labeled
I0917 06:32:47.967] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0917 06:32:47.967] has:Object 'Kind' is missing
I0917 06:32:48.057] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0917 06:32:48.139] pod/busybox0 patched
I0917 06:32:48.140] pod/busybox1 patched
I0917 06:32:48.140] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0917 06:32:48.235] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0917 06:32:48.238] Successful
I0917 06:32:48.238] message:pod/busybox0 patched
I0917 06:32:48.238] pod/busybox1 patched
I0917 06:32:48.239] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0917 06:32:48.239] has:Object 'Kind' is missing
I0917 06:32:48.331] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0917 06:32:48.510] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:32:48.512] Successful
I0917 06:32:48.513] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0917 06:32:48.513] pod "busybox0" force deleted
I0917 06:32:48.513] pod "busybox1" force deleted
I0917 06:32:48.514] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0917 06:32:48.514] has:Object 'Kind' is missing
I0917 06:32:48.598] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:32:48.899] replicationcontroller/busybox0 created
I0917 06:32:48.909] replicationcontroller/busybox1 created
W0917 06:32:49.010] kubectl convert is DEPRECATED and will be removed in a future version.
W0917 06:32:49.011] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0917 06:32:49.012] E0917 06:32:47.423356   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:49.013] E0917 06:32:47.532830   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:49.014] E0917 06:32:47.642211   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:49.015] E0917 06:32:47.752701   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:49.015] I0917 06:32:48.113339   52779 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0917 06:32:49.016] E0917 06:32:48.425014   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:49.017] E0917 06:32:48.534038   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:49.018] E0917 06:32:48.643634   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:49.019] E0917 06:32:48.755130   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:49.021] I0917 06:32:48.905955   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568701964-27133", Name:"busybox0", UID:"8a297eae-3f1c-45d2-8250-ef21850268d6", APIVersion:"v1", ResourceVersion:"983", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-sbhgj
W0917 06:32:49.021] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0917 06:32:49.023] I0917 06:32:48.915259   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568701964-27133", Name:"busybox1", UID:"c69dad39-2798-44da-bb08-8c210197abeb", APIVersion:"v1", ResourceVersion:"985", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-qbx69
I0917 06:32:49.137] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0917 06:32:49.322] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0917 06:32:49.514] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0917 06:32:49.685] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0917 06:32:49.991] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0917 06:32:50.148] (Bgeneric-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0917 06:32:50.153] Successful
I0917 06:32:50.154] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0917 06:32:50.154] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0917 06:32:50.155] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0917 06:32:50.156] has:Object 'Kind' is missing
W0917 06:32:50.257] E0917 06:32:49.428126   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:50.258] E0917 06:32:49.537169   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:50.259] E0917 06:32:49.646550   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:50.260] E0917 06:32:49.758118   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:32:50.361] horizontalpodautoscaler.autoscaling "busybox0" deleted
W0917 06:32:50.463] E0917 06:32:50.431132   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:50.541] E0917 06:32:50.540368   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:32:50.643] horizontalpodautoscaler.autoscaling "busybox1" deleted
W0917 06:32:50.744] E0917 06:32:50.649983   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:50.762] E0917 06:32:50.761374   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:32:50.864] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0917 06:32:51.043] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0917 06:32:51.242] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0917 06:32:51.669] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0917 06:32:51.872] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0917 06:32:51.877] Successful
I0917 06:32:51.878] message:service/busybox0 exposed
I0917 06:32:51.878] service/busybox1 exposed
I0917 06:32:51.880] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0917 06:32:51.881] has:Object 'Kind' is missing
W0917 06:32:51.983] E0917 06:32:51.433738   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:51.984] E0917 06:32:51.542881   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:51.985] E0917 06:32:51.652959   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:51.986] E0917 06:32:51.763655   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:32:52.086] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0917 06:32:52.234] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0917 06:32:52.402] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0917 06:32:52.815] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0917 06:32:53.031] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0917 06:32:53.036] Successful
I0917 06:32:53.037] message:replicationcontroller/busybox0 scaled
I0917 06:32:53.038] replicationcontroller/busybox1 scaled
I0917 06:32:53.039] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0917 06:32:53.040] has:Object 'Kind' is missing
W0917 06:32:53.141] E0917 06:32:52.436026   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:53.142] E0917 06:32:52.545249   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:53.143] I0917 06:32:52.575194   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568701964-27133", Name:"busybox0", UID:"8a297eae-3f1c-45d2-8250-ef21850268d6", APIVersion:"v1", ResourceVersion:"1005", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-s5xsq
W0917 06:32:53.144] I0917 06:32:52.591872   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568701964-27133", Name:"busybox1", UID:"c69dad39-2798-44da-bb08-8c210197abeb", APIVersion:"v1", ResourceVersion:"1010", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-chp9q
W0917 06:32:53.144] E0917 06:32:52.656217   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:53.145] E0917 06:32:52.766152   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:32:53.246] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0917 06:32:53.468] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:32:53.471] Successful
I0917 06:32:53.471] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0917 06:32:53.471] replicationcontroller "busybox0" force deleted
I0917 06:32:53.472] replicationcontroller "busybox1" force deleted
I0917 06:32:53.472] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0917 06:32:53.472] has:Object 'Kind' is missing
I0917 06:32:53.569] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:32:53.723] deployment.apps/nginx1-deployment created
I0917 06:32:53.727] deployment.apps/nginx0-deployment created
W0917 06:32:53.828] E0917 06:32:53.437218   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:53.829] E0917 06:32:53.546620   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:53.829] E0917 06:32:53.657838   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:53.830] I0917 06:32:53.726419   52779 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568701964-27133", Name:"nginx1-deployment", UID:"03564193-ce3c-41a6-a577-9f6ee7199a23", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
W0917 06:32:53.830] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0917 06:32:53.831] I0917 06:32:53.731115   52779 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568701964-27133", Name:"nginx0-deployment", UID:"d7149a5e-e35d-44e1-9d85-a46ef0172bf9", APIVersion:"apps/v1", ResourceVersion:"1027", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
W0917 06:32:53.831] I0917 06:32:53.731711   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568701964-27133", Name:"nginx1-deployment-7bdbbfb5cf", UID:"53b853da-9532-4e2e-bd72-83d98691385a", APIVersion:"apps/v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-s2v98
W0917 06:32:53.832] I0917 06:32:53.734363   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568701964-27133", Name:"nginx0-deployment-57c6bff7f6", UID:"c2e59042-eb72-4d2c-acd6-2a74d196e278", APIVersion:"apps/v1", ResourceVersion:"1030", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-knhzg
W0917 06:32:53.833] I0917 06:32:53.735253   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568701964-27133", Name:"nginx1-deployment-7bdbbfb5cf", UID:"53b853da-9532-4e2e-bd72-83d98691385a", APIVersion:"apps/v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-hf4zg
W0917 06:32:53.833] I0917 06:32:53.738881   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568701964-27133", Name:"nginx0-deployment-57c6bff7f6", UID:"c2e59042-eb72-4d2c-acd6-2a74d196e278", APIVersion:"apps/v1", ResourceVersion:"1030", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-czhjr
W0917 06:32:53.833] E0917 06:32:53.767329   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:32:53.934] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0917 06:32:53.951] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0917 06:32:54.160] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0917 06:32:54.163] Successful
I0917 06:32:54.163] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0917 06:32:54.164] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0917 06:32:54.164] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0917 06:32:54.164] has:Object 'Kind' is missing
I0917 06:32:54.255] deployment.apps/nginx1-deployment paused
I0917 06:32:54.261] deployment.apps/nginx0-deployment paused
I0917 06:32:54.370] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0917 06:32:54.374] Successful
I0917 06:32:54.374] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0917 06:32:54.374] has:Object 'Kind' is missing
I0917 06:32:54.467] deployment.apps/nginx1-deployment resumed
I0917 06:32:54.473] deployment.apps/nginx0-deployment resumed
W0917 06:32:54.574] E0917 06:32:54.438610   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:54.574] E0917 06:32:54.548012   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:54.659] E0917 06:32:54.659200   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:32:54.760] generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I0917 06:32:54.760] (BSuccessful
I0917 06:32:54.761] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0917 06:32:54.761] has:Object 'Kind' is missing
I0917 06:32:54.761] Successful
I0917 06:32:54.761] message:deployment.apps/nginx1-deployment 
I0917 06:32:54.762] REVISION  CHANGE-CAUSE
I0917 06:32:54.762] 1         <none>
I0917 06:32:54.762] 
I0917 06:32:54.762] deployment.apps/nginx0-deployment 
I0917 06:32:54.762] REVISION  CHANGE-CAUSE
I0917 06:32:54.762] 1         <none>
I0917 06:32:54.762] 
I0917 06:32:54.763] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0917 06:32:54.763] has:nginx0-deployment
I0917 06:32:54.763] Successful
I0917 06:32:54.763] message:deployment.apps/nginx1-deployment 
I0917 06:32:54.763] REVISION  CHANGE-CAUSE
I0917 06:32:54.763] 1         <none>
I0917 06:32:54.763] 
I0917 06:32:54.764] deployment.apps/nginx0-deployment 
I0917 06:32:54.764] REVISION  CHANGE-CAUSE
I0917 06:32:54.764] 1         <none>
I0917 06:32:54.764] 
I0917 06:32:54.764] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0917 06:32:54.765] has:nginx1-deployment
I0917 06:32:54.765] Successful
I0917 06:32:54.765] message:deployment.apps/nginx1-deployment 
I0917 06:32:54.765] REVISION  CHANGE-CAUSE
I0917 06:32:54.765] 1         <none>
I0917 06:32:54.765] 
I0917 06:32:54.765] deployment.apps/nginx0-deployment 
I0917 06:32:54.765] REVISION  CHANGE-CAUSE
I0917 06:32:54.765] 1         <none>
I0917 06:32:54.766] 
I0917 06:32:54.766] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0917 06:32:54.766] has:Object 'Kind' is missing
I0917 06:32:54.786] deployment.apps "nginx1-deployment" force deleted
I0917 06:32:54.791] deployment.apps "nginx0-deployment" force deleted
W0917 06:32:54.892] E0917 06:32:54.772059   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:54.892] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0917 06:32:54.892] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0917 06:32:55.440] E0917 06:32:55.439903   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:55.550] E0917 06:32:55.549590   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:55.661] E0917 06:32:55.660827   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:55.773] E0917 06:32:55.773242   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:32:55.893] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:32:56.049] replicationcontroller/busybox0 created
I0917 06:32:56.054] replicationcontroller/busybox1 created
I0917 06:32:56.151] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0917 06:32:56.239] (BSuccessful
I0917 06:32:56.240] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0917 06:32:56.241] message:no rollbacker has been implemented for "ReplicationController"
I0917 06:32:56.241] no rollbacker has been implemented for "ReplicationController"
I0917 06:32:56.242] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0917 06:32:56.242] has:Object 'Kind' is missing
I0917 06:32:56.332] Successful
I0917 06:32:56.333] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0917 06:32:56.333] error: replicationcontrollers "busybox0" pausing is not supported
I0917 06:32:56.333] error: replicationcontrollers "busybox1" pausing is not supported
I0917 06:32:56.333] has:Object 'Kind' is missing
I0917 06:32:56.334] Successful
I0917 06:32:56.334] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0917 06:32:56.334] error: replicationcontrollers "busybox0" pausing is not supported
I0917 06:32:56.335] error: replicationcontrollers "busybox1" pausing is not supported
I0917 06:32:56.335] has:replicationcontrollers "busybox0" pausing is not supported
I0917 06:32:56.336] Successful
I0917 06:32:56.337] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0917 06:32:56.337] error: replicationcontrollers "busybox0" pausing is not supported
I0917 06:32:56.337] error: replicationcontrollers "busybox1" pausing is not supported
I0917 06:32:56.337] has:replicationcontrollers "busybox1" pausing is not supported
I0917 06:32:56.427] Successful
I0917 06:32:56.428] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0917 06:32:56.429] error: replicationcontrollers "busybox0" resuming is not supported
I0917 06:32:56.429] error: replicationcontrollers "busybox1" resuming is not supported
I0917 06:32:56.429] has:Object 'Kind' is missing
I0917 06:32:56.430] Successful
I0917 06:32:56.431] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0917 06:32:56.431] error: replicationcontrollers "busybox0" resuming is not supported
I0917 06:32:56.432] error: replicationcontrollers "busybox1" resuming is not supported
I0917 06:32:56.432] has:replicationcontrollers "busybox0" resuming is not supported
I0917 06:32:56.432] Successful
I0917 06:32:56.433] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0917 06:32:56.433] error: replicationcontrollers "busybox0" resuming is not supported
I0917 06:32:56.433] error: replicationcontrollers "busybox1" resuming is not supported
I0917 06:32:56.433] has:replicationcontrollers "busybox0" resuming is not supported
I0917 06:32:56.509] replicationcontroller "busybox0" force deleted
I0917 06:32:56.516] replicationcontroller "busybox1" force deleted
W0917 06:32:56.617] I0917 06:32:56.053135   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568701964-27133", Name:"busybox0", UID:"f9694667-78c3-4df5-93a7-6f21a9f6996b", APIVersion:"v1", ResourceVersion:"1074", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-c5whk
W0917 06:32:56.618] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0917 06:32:56.619] I0917 06:32:56.056930   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568701964-27133", Name:"busybox1", UID:"9c13898e-793c-4bbc-b22d-bfd27a9a6c9e", APIVersion:"v1", ResourceVersion:"1076", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-7m5dg
W0917 06:32:56.619] E0917 06:32:56.441354   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:56.620] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0917 06:32:56.620] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0917 06:32:56.621] E0917 06:32:56.550912   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:56.662] E0917 06:32:56.662113   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:56.775] E0917 06:32:56.774677   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:57.443] E0917 06:32:57.442861   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:32:57.544] Recording: run_namespace_tests
I0917 06:32:57.544] Running command: run_namespace_tests
I0917 06:32:57.546] 
I0917 06:32:57.549] +++ Running case: test-cmd.run_namespace_tests 
I0917 06:32:57.552] +++ working dir: /go/src/k8s.io/kubernetes
I0917 06:32:57.554] +++ command: run_namespace_tests
I0917 06:32:57.563] +++ [0917 06:32:57] Testing kubectl(v1:namespaces)
I0917 06:32:57.632] namespace/my-namespace created
I0917 06:32:57.718] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0917 06:32:57.793] namespace "my-namespace" deleted
W0917 06:32:57.894] E0917 06:32:57.552213   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:57.895] E0917 06:32:57.663964   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:57.895] E0917 06:32:57.776113   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:58.444] E0917 06:32:58.444262   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:58.554] E0917 06:32:58.553695   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:58.666] E0917 06:32:58.665299   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:58.778] E0917 06:32:58.777366   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:59.446] E0917 06:32:59.445652   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:59.555] E0917 06:32:59.555178   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:59.667] E0917 06:32:59.666660   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:32:59.779] E0917 06:32:59.778807   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:00.447] E0917 06:33:00.447033   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:00.557] E0917 06:33:00.556436   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:00.668] E0917 06:33:00.668357   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:00.780] E0917 06:33:00.780301   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:01.404] I0917 06:33:01.403528   52779 shared_informer.go:197] Waiting for caches to sync for resource quota
W0917 06:33:01.404] I0917 06:33:01.403582   52779 shared_informer.go:204] Caches are synced for resource quota 
W0917 06:33:01.449] E0917 06:33:01.448431   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:01.558] E0917 06:33:01.558242   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:01.670] E0917 06:33:01.669554   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:01.782] E0917 06:33:01.781541   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:01.813] I0917 06:33:01.812698   52779 shared_informer.go:197] Waiting for caches to sync for garbage collector
W0917 06:33:01.814] I0917 06:33:01.813852   52779 shared_informer.go:204] Caches are synced for garbage collector 
W0917 06:33:02.450] E0917 06:33:02.449923   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:02.560] E0917 06:33:02.559893   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:02.671] E0917 06:33:02.670720   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:02.783] E0917 06:33:02.782685   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:33:02.891] namespace/my-namespace condition met
I0917 06:33:02.991] Successful
I0917 06:33:02.992] message:Error from server (NotFound): namespaces "my-namespace" not found
I0917 06:33:02.992] has: not found
I0917 06:33:03.068] namespace/my-namespace created
I0917 06:33:03.162] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0917 06:33:03.377] Successful
I0917 06:33:03.377] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0917 06:33:03.377] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0917 06:33:03.381] namespace "namespace-1568701922-25847" deleted
I0917 06:33:03.381] namespace "namespace-1568701923-2742" deleted
I0917 06:33:03.381] namespace "namespace-1568701925-23769" deleted
I0917 06:33:03.381] namespace "namespace-1568701926-13637" deleted
I0917 06:33:03.381] namespace "namespace-1568701963-12864" deleted
I0917 06:33:03.381] namespace "namespace-1568701964-27133" deleted
I0917 06:33:03.382] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0917 06:33:03.382] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0917 06:33:03.382] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0917 06:33:03.382] has:warning: deleting cluster-scoped resources
I0917 06:33:03.382] Successful
I0917 06:33:03.382] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0917 06:33:03.382] namespace "kube-node-lease" deleted
I0917 06:33:03.383] namespace "my-namespace" deleted
I0917 06:33:03.383] namespace "namespace-1568701829-16920" deleted
... skipping 27 lines ...
I0917 06:33:03.386] namespace "namespace-1568701922-25847" deleted
I0917 06:33:03.386] namespace "namespace-1568701923-2742" deleted
I0917 06:33:03.386] namespace "namespace-1568701925-23769" deleted
I0917 06:33:03.387] namespace "namespace-1568701926-13637" deleted
I0917 06:33:03.387] namespace "namespace-1568701963-12864" deleted
I0917 06:33:03.387] namespace "namespace-1568701964-27133" deleted
I0917 06:33:03.387] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0917 06:33:03.387] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0917 06:33:03.387] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0917 06:33:03.388] has:namespace "my-namespace" deleted
I0917 06:33:03.486] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0917 06:33:03.572] namespace/other created
I0917 06:33:03.671] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0917 06:33:03.769] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:33:03.945] pod/valid-pod created
I0917 06:33:04.042] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0917 06:33:04.129] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0917 06:33:04.208] Successful
I0917 06:33:04.209] message:error: a resource cannot be retrieved by name across all namespaces
I0917 06:33:04.209] has:a resource cannot be retrieved by name across all namespaces
I0917 06:33:04.296] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0917 06:33:04.374] pod "valid-pod" force deleted
I0917 06:33:04.469] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:33:04.546] namespace "other" deleted
W0917 06:33:04.647] E0917 06:33:03.451792   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:04.647] E0917 06:33:03.561363   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:04.647] E0917 06:33:03.672110   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:04.648] E0917 06:33:03.784036   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:04.648] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0917 06:33:04.648] E0917 06:33:04.453117   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:04.648] E0917 06:33:04.562728   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:04.673] E0917 06:33:04.673402   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:04.786] E0917 06:33:04.785646   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:04.820] I0917 06:33:04.819969   52779 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1568701964-27133
W0917 06:33:04.827] I0917 06:33:04.826870   52779 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1568701964-27133
W0917 06:33:05.455] E0917 06:33:05.454672   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:05.564] E0917 06:33:05.564198   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:05.675] E0917 06:33:05.674837   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:05.787] E0917 06:33:05.787011   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:06.456] E0917 06:33:06.456149   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:06.566] E0917 06:33:06.565611   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:06.676] E0917 06:33:06.676214   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:06.789] E0917 06:33:06.788480   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:07.458] E0917 06:33:07.457431   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:07.567] E0917 06:33:07.567016   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:07.678] E0917 06:33:07.677600   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:07.790] E0917 06:33:07.790010   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:08.459] E0917 06:33:08.458588   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:08.575] E0917 06:33:08.574440   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:08.680] E0917 06:33:08.679367   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:08.793] E0917 06:33:08.792399   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:09.460] E0917 06:33:09.460091   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:09.576] E0917 06:33:09.576281   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:33:09.677] +++ exit code: 0
I0917 06:33:09.688] Recording: run_secrets_test
I0917 06:33:09.689] Running command: run_secrets_test
I0917 06:33:09.711] 
I0917 06:33:09.713] +++ Running case: test-cmd.run_secrets_test 
I0917 06:33:09.716] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 57 lines ...
I0917 06:33:11.606] core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0917 06:33:11.693] secret "test-secret" deleted
I0917 06:33:11.788] secret/test-secret created
I0917 06:33:11.879] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0917 06:33:11.981] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0917 06:33:12.069] secret "test-secret" deleted
W0917 06:33:12.170] E0917 06:33:09.680634   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:12.170] E0917 06:33:09.793684   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:12.170] I0917 06:33:09.961547   68909 loader.go:375] Config loaded from file:  /tmp/tmp.HI2nkjbPt3/.kube/config
W0917 06:33:12.171] E0917 06:33:10.461166   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:12.171] E0917 06:33:10.578623   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:12.171] E0917 06:33:10.682918   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:12.171] E0917 06:33:10.794941   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:12.171] E0917 06:33:11.463030   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:12.172] E0917 06:33:11.580243   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:12.172] E0917 06:33:11.684995   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:12.172] E0917 06:33:11.796253   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:33:12.273] secret/secret-string-data created
I0917 06:33:12.341] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0917 06:33:12.431] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0917 06:33:12.521] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0917 06:33:12.606] secret "secret-string-data" deleted
I0917 06:33:12.710] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:33:12.875] secret "test-secret" deleted
I0917 06:33:12.973] namespace "test-secrets" deleted
W0917 06:33:13.074] E0917 06:33:12.464270   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:13.074] E0917 06:33:12.581732   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:13.075] E0917 06:33:12.686702   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:13.075] E0917 06:33:12.797456   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:13.075] I0917 06:33:12.969735   52779 namespace_controller.go:171] Namespace has been deleted my-namespace
W0917 06:33:13.429] I0917 06:33:13.428713   52779 namespace_controller.go:171] Namespace has been deleted kube-node-lease
W0917 06:33:13.444] I0917 06:33:13.443455   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701832-30661
W0917 06:33:13.445] I0917 06:33:13.444506   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701844-27874
W0917 06:33:13.446] I0917 06:33:13.445799   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701829-16920
W0917 06:33:13.459] I0917 06:33:13.459252   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701848-9886
W0917 06:33:13.460] I0917 06:33:13.460049   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701844-1421
W0917 06:33:13.466] I0917 06:33:13.466030   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701841-7081
W0917 06:33:13.467] E0917 06:33:13.467195   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:13.468] I0917 06:33:13.468230   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701835-25016
W0917 06:33:13.471] I0917 06:33:13.470669   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701848-27763
W0917 06:33:13.518] I0917 06:33:13.517892   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701849-26277
W0917 06:33:13.583] E0917 06:33:13.583137   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:13.642] I0917 06:33:13.641730   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701871-31227
W0917 06:33:13.644] I0917 06:33:13.643574   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701859-2759
W0917 06:33:13.653] I0917 06:33:13.648509   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701859-5895
W0917 06:33:13.657] I0917 06:33:13.657334   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701875-14732
W0917 06:33:13.673] I0917 06:33:13.672568   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701874-568
W0917 06:33:13.674] I0917 06:33:13.674039   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701872-31092
W0917 06:33:13.677] I0917 06:33:13.676861   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701876-5723
W0917 06:33:13.689] E0917 06:33:13.688466   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:13.689] I0917 06:33:13.688589   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701879-2498
W0917 06:33:13.695] I0917 06:33:13.694645   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701878-21214
W0917 06:33:13.756] I0917 06:33:13.756382   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701881-9581
W0917 06:33:13.799] E0917 06:33:13.798690   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:13.863] I0917 06:33:13.863344   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701902-8828
W0917 06:33:13.864] I0917 06:33:13.864007   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701884-20278
W0917 06:33:13.873] I0917 06:33:13.872737   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701903-30528
W0917 06:33:13.903] I0917 06:33:13.902481   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701903-3465
W0917 06:33:13.917] I0917 06:33:13.916640   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701918-28079
W0917 06:33:13.920] I0917 06:33:13.920336   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701912-23791
... skipping 3 lines ...
W0917 06:33:13.963] I0917 06:33:13.963343   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701918-2274
W0917 06:33:14.030] I0917 06:33:14.029965   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701923-2742
W0917 06:33:14.036] I0917 06:33:14.035575   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701926-13637
W0917 06:33:14.037] I0917 06:33:14.037467   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701925-23769
W0917 06:33:14.056] I0917 06:33:14.055900   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701963-12864
W0917 06:33:14.105] I0917 06:33:14.105176   52779 namespace_controller.go:171] Namespace has been deleted namespace-1568701964-27133
W0917 06:33:14.469] E0917 06:33:14.468428   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:14.585] E0917 06:33:14.584325   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:14.630] I0917 06:33:14.629963   52779 namespace_controller.go:171] Namespace has been deleted other
W0917 06:33:14.690] E0917 06:33:14.689968   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:14.800] E0917 06:33:14.800309   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:15.470] E0917 06:33:15.469947   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:15.586] E0917 06:33:15.585573   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:15.692] E0917 06:33:15.691322   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:15.803] E0917 06:33:15.802604   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:16.471] E0917 06:33:16.471183   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:16.587] E0917 06:33:16.587135   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:16.693] E0917 06:33:16.692722   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:16.804] E0917 06:33:16.804031   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:17.473] E0917 06:33:17.472486   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:17.589] E0917 06:33:17.588563   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:17.694] E0917 06:33:17.694150   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:17.806] E0917 06:33:17.805359   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:33:18.084] +++ exit code: 0
I0917 06:33:18.119] Recording: run_configmap_tests
I0917 06:33:18.120] Running command: run_configmap_tests
I0917 06:33:18.145] 
I0917 06:33:18.147] +++ Running case: test-cmd.run_configmap_tests 
I0917 06:33:18.150] +++ working dir: /go/src/k8s.io/kubernetes
I0917 06:33:18.153] +++ command: run_configmap_tests
I0917 06:33:18.164] +++ [0917 06:33:18] Creating namespace namespace-1568701998-26776
I0917 06:33:18.237] namespace/namespace-1568701998-26776 created
I0917 06:33:18.307] Context "test" modified.
I0917 06:33:18.314] +++ [0917 06:33:18] Testing configmaps
W0917 06:33:18.474] E0917 06:33:18.473832   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:33:18.575] configmap/test-configmap created
I0917 06:33:18.610] core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
I0917 06:33:18.693] configmap "test-configmap" deleted
I0917 06:33:18.791] core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
I0917 06:33:18.865] namespace/test-configmaps created
I0917 06:33:18.957] core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
... skipping 3 lines ...
I0917 06:33:19.289] configmap/test-binary-configmap created
I0917 06:33:19.384] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0917 06:33:19.473] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0917 06:33:19.718] configmap "test-configmap" deleted
I0917 06:33:19.803] configmap "test-binary-configmap" deleted
I0917 06:33:19.885] namespace "test-configmaps" deleted
W0917 06:33:19.986] E0917 06:33:18.589903   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:19.986] E0917 06:33:18.695372   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:19.987] E0917 06:33:18.806847   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:19.987] E0917 06:33:19.475397   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:19.987] E0917 06:33:19.591728   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:19.987] E0917 06:33:19.696647   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:19.988] E0917 06:33:19.807941   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:20.477] E0917 06:33:20.477145   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:20.594] E0917 06:33:20.593444   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:20.698] E0917 06:33:20.698017   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:20.810] E0917 06:33:20.809637   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:21.479] E0917 06:33:21.478409   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:21.595] E0917 06:33:21.595058   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:21.700] E0917 06:33:21.699590   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:21.812] E0917 06:33:21.811226   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:22.480] E0917 06:33:22.479681   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:22.596] E0917 06:33:22.595960   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:22.701] E0917 06:33:22.700950   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:22.813] E0917 06:33:22.812551   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:23.059] I0917 06:33:23.058603   52779 namespace_controller.go:171] Namespace has been deleted test-secrets
W0917 06:33:23.481] E0917 06:33:23.481052   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:23.597] E0917 06:33:23.597228   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:23.702] E0917 06:33:23.702354   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:23.815] E0917 06:33:23.814471   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:24.483] E0917 06:33:24.482445   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:24.598] E0917 06:33:24.598191   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:24.704] E0917 06:33:24.703615   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:24.816] E0917 06:33:24.815678   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:33:25.020] +++ exit code: 0
I0917 06:33:25.053] Recording: run_client_config_tests
I0917 06:33:25.054] Running command: run_client_config_tests
I0917 06:33:25.076] 
I0917 06:33:25.079] +++ Running case: test-cmd.run_client_config_tests 
I0917 06:33:25.082] +++ working dir: /go/src/k8s.io/kubernetes
I0917 06:33:25.084] +++ command: run_client_config_tests
I0917 06:33:25.096] +++ [0917 06:33:25] Creating namespace namespace-1568702005-20972
I0917 06:33:25.166] namespace/namespace-1568702005-20972 created
I0917 06:33:25.234] Context "test" modified.
I0917 06:33:25.241] +++ [0917 06:33:25] Testing client config
I0917 06:33:25.311] Successful
I0917 06:33:25.312] message:error: stat missing: no such file or directory
I0917 06:33:25.312] has:missing: no such file or directory
I0917 06:33:25.379] Successful
I0917 06:33:25.380] message:error: stat missing: no such file or directory
I0917 06:33:25.380] has:missing: no such file or directory
I0917 06:33:25.450] Successful
I0917 06:33:25.450] message:error: stat missing: no such file or directory
I0917 06:33:25.451] has:missing: no such file or directory
I0917 06:33:25.518] Successful
I0917 06:33:25.519] message:Error in configuration: context was not found for specified context: missing-context
I0917 06:33:25.519] has:context was not found for specified context: missing-context
I0917 06:33:25.589] Successful
I0917 06:33:25.589] message:error: no server found for cluster "missing-cluster"
I0917 06:33:25.589] has:no server found for cluster "missing-cluster"
I0917 06:33:25.657] Successful
I0917 06:33:25.657] message:error: auth info "missing-user" does not exist
I0917 06:33:25.657] has:auth info "missing-user" does not exist
W0917 06:33:25.758] E0917 06:33:25.483881   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:25.758] E0917 06:33:25.600557   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:25.759] E0917 06:33:25.705030   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:25.817] E0917 06:33:25.816984   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:33:25.918] Successful
I0917 06:33:25.918] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0917 06:33:25.919] has:error loading config file
I0917 06:33:25.919] Successful
I0917 06:33:25.919] message:error: stat missing-config: no such file or directory
I0917 06:33:25.919] has:no such file or directory
I0917 06:33:25.919] +++ exit code: 0
I0917 06:33:25.919] Recording: run_service_accounts_tests
I0917 06:33:25.919] Running command: run_service_accounts_tests
I0917 06:33:25.935] 
I0917 06:33:25.938] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 7 lines ...
I0917 06:33:26.262] namespace/test-service-accounts created
I0917 06:33:26.352] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0917 06:33:26.423] serviceaccount/test-service-account created
I0917 06:33:26.512] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0917 06:33:26.584] serviceaccount "test-service-account" deleted
I0917 06:33:26.665] namespace "test-service-accounts" deleted
W0917 06:33:26.766] E0917 06:33:26.485169   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:26.766] E0917 06:33:26.601919   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:26.767] E0917 06:33:26.706511   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:26.819] E0917 06:33:26.818332   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:27.487] E0917 06:33:27.486544   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:27.604] E0917 06:33:27.603524   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:27.708] E0917 06:33:27.707814   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:27.820] E0917 06:33:27.819861   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:28.488] E0917 06:33:28.487989   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:28.605] E0917 06:33:28.605068   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:28.709] E0917 06:33:28.709275   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:28.821] E0917 06:33:28.821150   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:29.490] E0917 06:33:29.489606   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:29.607] E0917 06:33:29.606435   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:29.711] E0917 06:33:29.710635   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:29.824] E0917 06:33:29.823337   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:29.997] I0917 06:33:29.996687   52779 namespace_controller.go:171] Namespace has been deleted test-configmaps
W0917 06:33:30.491] E0917 06:33:30.490915   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:30.608] E0917 06:33:30.607717   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:30.712] E0917 06:33:30.712048   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:30.825] E0917 06:33:30.824724   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:31.493] E0917 06:33:31.492703   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:31.609] E0917 06:33:31.608638   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:31.713] E0917 06:33:31.713312   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:33:31.814] +++ exit code: 0
I0917 06:33:31.815] Recording: run_job_tests
I0917 06:33:31.815] Running command: run_job_tests
I0917 06:33:31.848] 
I0917 06:33:31.850] +++ Running case: test-cmd.run_job_tests 
I0917 06:33:31.853] +++ working dir: /go/src/k8s.io/kubernetes
I0917 06:33:31.855] +++ command: run_job_tests
I0917 06:33:31.867] +++ [0917 06:33:31] Creating namespace namespace-1568702011-28499
I0917 06:33:31.949] namespace/namespace-1568702011-28499 created
I0917 06:33:32.032] Context "test" modified.
I0917 06:33:32.039] +++ [0917 06:33:32] Testing job
W0917 06:33:32.140] E0917 06:33:31.828064   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:33:32.241] batch.sh:30: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-jobs\" }}found{{end}}{{end}}:: :
I0917 06:33:32.241] namespace/test-jobs created
I0917 06:33:32.326] batch.sh:34: Successful get namespaces/test-jobs {{.metadata.name}}: test-jobs
I0917 06:33:32.417] cronjob.batch/pi created
I0917 06:33:32.515] batch.sh:39: Successful get cronjob/pi --namespace=test-jobs {{.metadata.name}}: pi
I0917 06:33:32.590] NAME   SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
I0917 06:33:32.590] pi     59 23 31 2 *   False     0        <none>          0s
W0917 06:33:32.691] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0917 06:33:32.691] E0917 06:33:32.494220   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:32.692] E0917 06:33:32.610218   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:32.715] E0917 06:33:32.714751   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:33:32.816] Name:                          pi
I0917 06:33:32.816] Namespace:                     test-jobs
I0917 06:33:32.816] Labels:                        run=pi
I0917 06:33:32.816] Annotations:                   <none>
I0917 06:33:32.817] Schedule:                      59 23 31 2 *
I0917 06:33:32.817] Concurrency Policy:            Allow
I0917 06:33:32.817] Suspend:                       False
I0917 06:33:32.817] Successful Job History Limit:  3
I0917 06:33:32.817] Failed Job History Limit:      1
I0917 06:33:32.817] Starting Deadline Seconds:     <unset>
I0917 06:33:32.818] Selector:                      <unset>
I0917 06:33:32.818] Parallelism:                   <unset>
I0917 06:33:32.818] Completions:                   <unset>
I0917 06:33:32.818] Pod Template:
I0917 06:33:32.818]   Labels:  run=pi
... skipping 32 lines ...
I0917 06:33:33.262]                 run=pi
I0917 06:33:33.262] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0917 06:33:33.262] Controlled By:  CronJob/pi
I0917 06:33:33.262] Parallelism:    1
I0917 06:33:33.262] Completions:    1
I0917 06:33:33.262] Start Time:     Tue, 17 Sep 2019 06:33:32 +0000
I0917 06:33:33.262] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0917 06:33:33.262] Pod Template:
I0917 06:33:33.262]   Labels:  controller-uid=da46a52c-df78-4966-bbcb-b0bc9bb9ecba
I0917 06:33:33.262]            job-name=test-job
I0917 06:33:33.263]            run=pi
I0917 06:33:33.263]   Containers:
I0917 06:33:33.263]    pi:
... skipping 15 lines ...
I0917 06:33:33.264]   Type    Reason            Age   From            Message
I0917 06:33:33.264]   ----    ------            ----  ----            -------
I0917 06:33:33.264]   Normal  SuccessfulCreate  1s    job-controller  Created pod: test-job-pqs2w
I0917 06:33:33.347] job.batch "test-job" deleted
I0917 06:33:33.443] cronjob.batch "pi" deleted
I0917 06:33:33.530] namespace "test-jobs" deleted
W0917 06:33:33.631] E0917 06:33:32.833006   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:33.631] I0917 06:33:32.982409   52779 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"da46a52c-df78-4966-bbcb-b0bc9bb9ecba", APIVersion:"batch/v1", ResourceVersion:"1397", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pqs2w
W0917 06:33:33.632] E0917 06:33:33.495475   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:33.632] E0917 06:33:33.611709   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:33.716] E0917 06:33:33.716355   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:33.835] E0917 06:33:33.835058   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:34.498] E0917 06:33:34.497310   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:34.613] E0917 06:33:34.613109   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:34.718] E0917 06:33:34.717695   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:34.837] E0917 06:33:34.836437   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:35.499] E0917 06:33:35.498947   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:35.615] E0917 06:33:35.615394   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:35.719] E0917 06:33:35.719029   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:35.838] E0917 06:33:35.837697   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:36.501] E0917 06:33:36.500378   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:36.617] E0917 06:33:36.617286   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:36.720] E0917 06:33:36.720296   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:36.751] I0917 06:33:36.751242   52779 namespace_controller.go:171] Namespace has been deleted test-service-accounts
W0917 06:33:36.839] E0917 06:33:36.838960   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:37.502] E0917 06:33:37.501886   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:37.619] E0917 06:33:37.618605   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:37.722] E0917 06:33:37.721893   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:37.840] E0917 06:33:37.840309   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:38.504] E0917 06:33:38.503440   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:38.620] E0917 06:33:38.619519   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:33:38.720] +++ exit code: 0
I0917 06:33:38.721] Recording: run_create_job_tests
I0917 06:33:38.721] Running command: run_create_job_tests
I0917 06:33:38.721] 
I0917 06:33:38.721] +++ Running case: test-cmd.run_create_job_tests 
I0917 06:33:38.721] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 27 lines ...
I0917 06:33:40.001] +++ [0917 06:33:39] Testing pod templates
I0917 06:33:40.086] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:33:40.246] podtemplate/nginx created
I0917 06:33:40.340] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0917 06:33:40.413] NAME    CONTAINERS   IMAGES   POD LABELS
I0917 06:33:40.414] nginx   nginx        nginx    name=nginx
W0917 06:33:40.515] E0917 06:33:38.723123   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:40.515] E0917 06:33:38.841528   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:40.516] I0917 06:33:38.933305   52779 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568702018-9798", Name:"test-job", UID:"5978829b-67cf-47db-af8a-97af6737ebc1", APIVersion:"batch/v1", ResourceVersion:"1416", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-rfr7s
W0917 06:33:40.516] I0917 06:33:39.186420   52779 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568702018-9798", Name:"test-job-pi", UID:"9209b1d2-da93-4f04-a150-aab7248cbf74", APIVersion:"batch/v1", ResourceVersion:"1423", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-kqsrm
W0917 06:33:40.516] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0917 06:33:40.517] E0917 06:33:39.504692   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:40.517] I0917 06:33:39.520618   52779 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568702018-9798", Name:"my-pi", UID:"28abf9a0-5185-45db-be8f-3f013153f94c", APIVersion:"batch/v1", ResourceVersion:"1431", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-qzp94
W0917 06:33:40.518] E0917 06:33:39.620751   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:40.518] E0917 06:33:39.724370   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:40.518] E0917 06:33:39.843089   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:40.519] I0917 06:33:40.244310   49234 controller.go:606] quota admission added evaluator for: podtemplates
W0917 06:33:40.519] E0917 06:33:40.506332   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:33:40.619] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0917 06:33:40.669] podtemplate "nginx" deleted
I0917 06:33:40.761] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:33:40.775] +++ exit code: 0
I0917 06:33:40.809] Recording: run_service_tests
I0917 06:33:40.809] Running command: run_service_tests
... skipping 65 lines ...
I0917 06:33:41.732] Port:              <unset>  6379/TCP
I0917 06:33:41.732] TargetPort:        6379/TCP
I0917 06:33:41.732] Endpoints:         <none>
I0917 06:33:41.732] Session Affinity:  None
I0917 06:33:41.732] Events:            <none>
I0917 06:33:41.733] 
W0917 06:33:41.833] E0917 06:33:40.621977   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:41.834] E0917 06:33:40.725682   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:41.834] E0917 06:33:40.844539   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:41.834] E0917 06:33:41.512330   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:41.835] E0917 06:33:41.623189   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:41.836] E0917 06:33:41.727018   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:41.846] E0917 06:33:41.845958   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:33:41.947] Successful describe services:
I0917 06:33:41.947] Name:              kubernetes
I0917 06:33:41.947] Namespace:         default
I0917 06:33:41.948] Labels:            component=apiserver
I0917 06:33:41.948]                    provider=kubernetes
I0917 06:33:41.948] Annotations:       <none>
... skipping 178 lines ...
I0917 06:33:42.961]   selector:
I0917 06:33:42.961]     role: padawan
I0917 06:33:42.961]   sessionAffinity: None
I0917 06:33:42.961]   type: ClusterIP
I0917 06:33:42.961] status:
I0917 06:33:42.961]   loadBalancer: {}
W0917 06:33:43.063] E0917 06:33:42.513912   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:43.063] E0917 06:33:42.624611   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:43.064] E0917 06:33:42.728737   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:43.064] E0917 06:33:42.847590   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:43.064] error: you must specify resources by --filename when --local is set.
W0917 06:33:43.064] Example resource specifications include:
W0917 06:33:43.064]    '-f rsrc.yaml'
W0917 06:33:43.064]    '--filename=rsrc.json'
I0917 06:33:43.165] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0917 06:33:43.314] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0917 06:33:43.401] service "redis-master" deleted
I0917 06:33:43.508] core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0917 06:33:43.609] core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0917 06:33:43.775] service/redis-master created
W0917 06:33:43.876] E0917 06:33:43.515642   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:43.876] I0917 06:33:43.611837   52779 namespace_controller.go:171] Namespace has been deleted test-jobs
W0917 06:33:43.876] E0917 06:33:43.625536   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:43.877] E0917 06:33:43.730252   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:43.877] E0917 06:33:43.848944   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:33:43.977] core.sh:920: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0917 06:33:43.998] core.sh:924: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0917 06:33:44.162] (Bservice/service-v1-test created
I0917 06:33:44.262] core.sh:945: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0917 06:33:44.424] service/service-v1-test replaced
I0917 06:33:44.524] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0917 06:33:44.610] service "redis-master" deleted
I0917 06:33:44.699] service "service-v1-test" deleted
I0917 06:33:44.792] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0917 06:33:44.882] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0917 06:33:45.043] service/redis-master created
W0917 06:33:45.144] E0917 06:33:44.517086   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:45.145] E0917 06:33:44.627450   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:45.145] E0917 06:33:44.731958   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:45.145] E0917 06:33:44.850474   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:33:45.245] service/redis-slave created
I0917 06:33:45.300] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0917 06:33:45.389] Successful
I0917 06:33:45.389] message:NAME           RSRC
I0917 06:33:45.390] kubernetes     145
I0917 06:33:45.390] redis-master   1467
... skipping 84 lines ...
I0917 06:33:50.248] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0917 06:33:50.337] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0917 06:33:50.429] daemonset.apps/bind rolled back
I0917 06:33:50.532] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0917 06:33:50.620] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0917 06:33:50.727] Successful
I0917 06:33:50.727] message:error: unable to find specified revision 1000000 in history
I0917 06:33:50.727] has:unable to find specified revision
I0917 06:33:50.818] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0917 06:33:50.907] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0917 06:33:51.007] daemonset.apps/bind rolled back
I0917 06:33:51.106] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0917 06:33:51.194] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I0917 06:33:52.684] Namespace:    namespace-1568702031-1758
I0917 06:33:52.684] Selector:     app=guestbook,tier=frontend
I0917 06:33:52.685] Labels:       app=guestbook
I0917 06:33:52.685]               tier=frontend
I0917 06:33:52.685] Annotations:  <none>
I0917 06:33:52.685] Replicas:     3 current / 3 desired
I0917 06:33:52.685] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0917 06:33:52.685] Pod Template:
I0917 06:33:52.685]   Labels:  app=guestbook
I0917 06:33:52.685]            tier=frontend
I0917 06:33:52.685]   Containers:
I0917 06:33:52.685]    php-redis:
I0917 06:33:52.685]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0917 06:33:52.802] Namespace:    namespace-1568702031-1758
I0917 06:33:52.803] Selector:     app=guestbook,tier=frontend
I0917 06:33:52.803] Labels:       app=guestbook
I0917 06:33:52.803]               tier=frontend
I0917 06:33:52.803] Annotations:  <none>
I0917 06:33:52.803] Replicas:     3 current / 3 desired
I0917 06:33:52.803] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0917 06:33:52.803] Pod Template:
I0917 06:33:52.803]   Labels:  app=guestbook
I0917 06:33:52.803]            tier=frontend
I0917 06:33:52.804]   Containers:
I0917 06:33:52.804]    php-redis:
I0917 06:33:52.804]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
I0917 06:33:52.805]   Type    Reason            Age   From                    Message
I0917 06:33:52.805]   ----    ------            ----  ----                    -------
I0917 06:33:52.805]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-vlvmm
I0917 06:33:52.805]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-mkxff
I0917 06:33:52.805]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-dsw26
W0917 06:33:52.906] E0917 06:33:45.518793   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.906] E0917 06:33:45.628992   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.906] E0917 06:33:45.733354   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.906] E0917 06:33:45.852152   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.907] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0917 06:33:52.907] I0917 06:33:46.384284   52779 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"4cf74132-12e7-448b-9cf4-aada6b1da786", APIVersion:"apps/v1", ResourceVersion:"1482", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-bd968f46 to 2
W0917 06:33:52.907] I0917 06:33:46.389062   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"040a2a8e-b25a-48fc-a2db-0894cc9af66d", APIVersion:"apps/v1", ResourceVersion:"1483", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-6dpln
W0917 06:33:52.908] I0917 06:33:46.391938   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"040a2a8e-b25a-48fc-a2db-0894cc9af66d", APIVersion:"apps/v1", ResourceVersion:"1483", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-mx2lb
W0917 06:33:52.908] E0917 06:33:46.520163   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.908] E0917 06:33:46.630477   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.909] E0917 06:33:46.734815   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.909] E0917 06:33:46.853556   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.909] I0917 06:33:47.474688   49234 controller.go:606] quota admission added evaluator for: daemonsets.apps
W0917 06:33:52.909] I0917 06:33:47.484602   49234 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W0917 06:33:52.910] E0917 06:33:47.521420   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.910] E0917 06:33:47.631741   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.910] E0917 06:33:47.736085   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.911] E0917 06:33:47.854833   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.911] E0917 06:33:48.522583   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.911] E0917 06:33:48.633128   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.911] E0917 06:33:48.737604   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.912] E0917 06:33:48.856409   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.912] E0917 06:33:49.523844   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.912] E0917 06:33:49.634725   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.913] E0917 06:33:49.739090   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.913] E0917 06:33:49.857958   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.913] E0917 06:33:50.525878   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.914] E0917 06:33:50.636049   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.914] E0917 06:33:50.740578   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.914] E0917 06:33:50.859561   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.919] E0917 06:33:51.019518   52779 daemon_controller.go:302] namespace-1568702028-13161/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1568702028-13161", SelfLink:"/apis/apps/v1/namespaces/namespace-1568702028-13161/daemonsets/bind", UID:"5aa44394-47b7-4f5a-a76c-11e7bdfc75c1", ResourceVersion:"1552", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63704298829, loc:(*time.Location)(0x7750f40)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1568702028-13161\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001d716c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, 
v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0024bb038), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0020b8d80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001d716e0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000f1b0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0024bb08c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0917 06:33:52.920] E0917 06:33:51.527282   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.920] E0917 06:33:51.637593   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.920] E0917 06:33:51.742013   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.921] E0917 06:33:51.860936   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.921] I0917 06:33:51.958492   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"frontend", UID:"e34871be-4cce-4162-bd23-f8e685f558e9", APIVersion:"v1", ResourceVersion:"1560", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5fgdt
W0917 06:33:52.921] I0917 06:33:51.962048   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"frontend", UID:"e34871be-4cce-4162-bd23-f8e685f558e9", APIVersion:"v1", ResourceVersion:"1560", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gwbd5
W0917 06:33:52.922] I0917 06:33:51.962747   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"frontend", UID:"e34871be-4cce-4162-bd23-f8e685f558e9", APIVersion:"v1", ResourceVersion:"1560", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8b59l
W0917 06:33:52.922] I0917 06:33:52.427725   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"frontend", UID:"cee57d3f-24d3-4ec7-bc5a-f85a3dec460d", APIVersion:"v1", ResourceVersion:"1576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vlvmm
W0917 06:33:52.923] I0917 06:33:52.431438   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"frontend", UID:"cee57d3f-24d3-4ec7-bc5a-f85a3dec460d", APIVersion:"v1", ResourceVersion:"1576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mkxff
W0917 06:33:52.923] I0917 06:33:52.431590   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"frontend", UID:"cee57d3f-24d3-4ec7-bc5a-f85a3dec460d", APIVersion:"v1", ResourceVersion:"1576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dsw26
W0917 06:33:52.923] E0917 06:33:52.529161   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.923] E0917 06:33:52.639082   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.924] E0917 06:33:52.743459   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:52.924] E0917 06:33:52.862237   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:33:53.024] core.sh:1065: Successful describe
I0917 06:33:53.025] Name:         frontend
I0917 06:33:53.025] Namespace:    namespace-1568702031-1758
I0917 06:33:53.025] Selector:     app=guestbook,tier=frontend
I0917 06:33:53.025] Labels:       app=guestbook
I0917 06:33:53.025]               tier=frontend
I0917 06:33:53.025] Annotations:  <none>
I0917 06:33:53.025] Replicas:     3 current / 3 desired
I0917 06:33:53.025] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0917 06:33:53.025] Pod Template:
I0917 06:33:53.025]   Labels:  app=guestbook
I0917 06:33:53.025]            tier=frontend
I0917 06:33:53.026]   Containers:
I0917 06:33:53.026]    php-redis:
I0917 06:33:53.026]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0917 06:33:53.067] Namespace:    namespace-1568702031-1758
I0917 06:33:53.067] Selector:     app=guestbook,tier=frontend
I0917 06:33:53.067] Labels:       app=guestbook
I0917 06:33:53.067]               tier=frontend
I0917 06:33:53.067] Annotations:  <none>
I0917 06:33:53.067] Replicas:     3 current / 3 desired
I0917 06:33:53.068] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0917 06:33:53.068] Pod Template:
I0917 06:33:53.068]   Labels:  app=guestbook
I0917 06:33:53.068]            tier=frontend
I0917 06:33:53.068]   Containers:
I0917 06:33:53.068]    php-redis:
I0917 06:33:53.068]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0917 06:33:53.222] Namespace:    namespace-1568702031-1758
I0917 06:33:53.223] Selector:     app=guestbook,tier=frontend
I0917 06:33:53.223] Labels:       app=guestbook
I0917 06:33:53.223]               tier=frontend
I0917 06:33:53.223] Annotations:  <none>
I0917 06:33:53.223] Replicas:     3 current / 3 desired
I0917 06:33:53.223] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0917 06:33:53.223] Pod Template:
I0917 06:33:53.224]   Labels:  app=guestbook
I0917 06:33:53.224]            tier=frontend
I0917 06:33:53.224]   Containers:
I0917 06:33:53.224]    php-redis:
I0917 06:33:53.224]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0917 06:33:53.341] Namespace:    namespace-1568702031-1758
I0917 06:33:53.341] Selector:     app=guestbook,tier=frontend
I0917 06:33:53.341] Labels:       app=guestbook
I0917 06:33:53.341]               tier=frontend
I0917 06:33:53.342] Annotations:  <none>
I0917 06:33:53.342] Replicas:     3 current / 3 desired
I0917 06:33:53.342] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0917 06:33:53.342] Pod Template:
I0917 06:33:53.342]   Labels:  app=guestbook
I0917 06:33:53.342]            tier=frontend
I0917 06:33:53.342]   Containers:
I0917 06:33:53.342]    php-redis:
I0917 06:33:53.343]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0917 06:33:53.450] Namespace:    namespace-1568702031-1758
I0917 06:33:53.450] Selector:     app=guestbook,tier=frontend
I0917 06:33:53.450] Labels:       app=guestbook
I0917 06:33:53.450]               tier=frontend
I0917 06:33:53.451] Annotations:  <none>
I0917 06:33:53.451] Replicas:     3 current / 3 desired
I0917 06:33:53.451] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0917 06:33:53.451] Pod Template:
I0917 06:33:53.451]   Labels:  app=guestbook
I0917 06:33:53.451]            tier=frontend
I0917 06:33:53.451]   Containers:
I0917 06:33:53.451]    php-redis:
I0917 06:33:53.451]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0917 06:33:53.566] Namespace:    namespace-1568702031-1758
I0917 06:33:53.566] Selector:     app=guestbook,tier=frontend
I0917 06:33:53.566] Labels:       app=guestbook
I0917 06:33:53.566]               tier=frontend
I0917 06:33:53.566] Annotations:  <none>
I0917 06:33:53.567] Replicas:     3 current / 3 desired
I0917 06:33:53.567] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0917 06:33:53.567] Pod Template:
I0917 06:33:53.567]   Labels:  app=guestbook
I0917 06:33:53.567]            tier=frontend
I0917 06:33:53.567]   Containers:
I0917 06:33:53.567]    php-redis:
I0917 06:33:53.568]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 21 lines ...
I0917 06:33:54.309] replicationcontroller/frontend scaled
I0917 06:33:54.403] core.sh:1099: Successful get rc frontend {{.spec.replicas}}: 3
I0917 06:33:54.488] core.sh:1103: Successful get rc frontend {{.spec.replicas}}: 3
I0917 06:33:54.562] replicationcontroller/frontend scaled
I0917 06:33:54.660] core.sh:1107: Successful get rc frontend {{.spec.replicas}}: 2
I0917 06:33:54.738] replicationcontroller "frontend" deleted
W0917 06:33:54.839] E0917 06:33:53.530515   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:54.840] E0917 06:33:53.640411   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:54.840] E0917 06:33:53.745965   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:54.840] I0917 06:33:53.774653   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"frontend", UID:"cee57d3f-24d3-4ec7-bc5a-f85a3dec460d", APIVersion:"v1", ResourceVersion:"1586", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-dsw26
W0917 06:33:54.841] E0917 06:33:53.863570   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:54.841] error: Expected replicas to be 3, was 2
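The "Expected replicas to be 3, was 2" error above is kubectl's scale precondition check rejecting the request because the live replica count no longer matches. A sketch of the kind of call that produces it (the exact invocation is an assumption):
    # --current-replicas is a precondition; the scale is refused when the actual count differs
    kubectl scale rc frontend --current-replicas=3 --replicas=2
    # expected when the controller currently has 2 replicas: error: Expected replicas to be 3, was 2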
W0917 06:33:54.842] I0917 06:33:54.312552   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"frontend", UID:"cee57d3f-24d3-4ec7-bc5a-f85a3dec460d", APIVersion:"v1", ResourceVersion:"1592", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-h9mn8
W0917 06:33:54.842] E0917 06:33:54.532346   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:54.842] I0917 06:33:54.566995   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"frontend", UID:"cee57d3f-24d3-4ec7-bc5a-f85a3dec460d", APIVersion:"v1", ResourceVersion:"1597", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-h9mn8
W0917 06:33:54.843] E0917 06:33:54.642318   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:54.843] E0917 06:33:54.747403   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:54.865] E0917 06:33:54.864862   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:54.917] I0917 06:33:54.916252   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"redis-master", UID:"f3102946-a39e-4f07-8caa-04095170816a", APIVersion:"v1", ResourceVersion:"1609", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-fpvj7
I0917 06:33:55.017] replicationcontroller/redis-master created
I0917 06:33:55.074] replicationcontroller/redis-slave created
I0917 06:33:55.162] replicationcontroller/redis-master scaled
I0917 06:33:55.167] replicationcontroller/redis-slave scaled
I0917 06:33:55.260] core.sh:1117: Successful get rc redis-master {{.spec.replicas}}: 4
... skipping 4 lines ...
W0917 06:33:55.536] I0917 06:33:55.081372   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"redis-slave", UID:"73408c84-c843-4fb3-a30a-898d4a54b554", APIVersion:"v1", ResourceVersion:"1614", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-d6cfx
W0917 06:33:55.536] I0917 06:33:55.165402   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"redis-master", UID:"f3102946-a39e-4f07-8caa-04095170816a", APIVersion:"v1", ResourceVersion:"1621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-xbc6q
W0917 06:33:55.536] I0917 06:33:55.168349   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"redis-master", UID:"f3102946-a39e-4f07-8caa-04095170816a", APIVersion:"v1", ResourceVersion:"1621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-rpjqz
W0917 06:33:55.537] I0917 06:33:55.168872   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"redis-master", UID:"f3102946-a39e-4f07-8caa-04095170816a", APIVersion:"v1", ResourceVersion:"1621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-b9rd5
W0917 06:33:55.537] I0917 06:33:55.171188   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"redis-slave", UID:"73408c84-c843-4fb3-a30a-898d4a54b554", APIVersion:"v1", ResourceVersion:"1623", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-ffccp
W0917 06:33:55.537] I0917 06:33:55.174157   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"redis-slave", UID:"73408c84-c843-4fb3-a30a-898d4a54b554", APIVersion:"v1", ResourceVersion:"1623", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-62bmj
W0917 06:33:55.538] E0917 06:33:55.533513   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:55.605] I0917 06:33:55.604512   52779 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568702031-1758", Name:"nginx-deployment", UID:"4f03e1e6-a84b-47f7-b999-f4c5118eff53", APIVersion:"apps/v1", ResourceVersion:"1655", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W0917 06:33:55.609] I0917 06:33:55.608287   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568702031-1758", Name:"nginx-deployment-6986c7bc94", UID:"c47bb612-cbb0-492e-af83-1e3903240e6e", APIVersion:"apps/v1", ResourceVersion:"1656", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-klzcp
W0917 06:33:55.612] I0917 06:33:55.611361   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568702031-1758", Name:"nginx-deployment-6986c7bc94", UID:"c47bb612-cbb0-492e-af83-1e3903240e6e", APIVersion:"apps/v1", ResourceVersion:"1656", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-st595
W0917 06:33:55.613] I0917 06:33:55.612187   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568702031-1758", Name:"nginx-deployment-6986c7bc94", UID:"c47bb612-cbb0-492e-af83-1e3903240e6e", APIVersion:"apps/v1", ResourceVersion:"1656", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-8ljg4
W0917 06:33:55.644] E0917 06:33:55.644148   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:55.704] I0917 06:33:55.703861   52779 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568702031-1758", Name:"nginx-deployment", UID:"4f03e1e6-a84b-47f7-b999-f4c5118eff53", APIVersion:"apps/v1", ResourceVersion:"1669", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6986c7bc94 to 1
W0917 06:33:55.710] I0917 06:33:55.709969   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568702031-1758", Name:"nginx-deployment-6986c7bc94", UID:"c47bb612-cbb0-492e-af83-1e3903240e6e", APIVersion:"apps/v1", ResourceVersion:"1670", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-st595
W0917 06:33:55.711] I0917 06:33:55.710582   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568702031-1758", Name:"nginx-deployment-6986c7bc94", UID:"c47bb612-cbb0-492e-af83-1e3903240e6e", APIVersion:"apps/v1", ResourceVersion:"1670", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-klzcp
W0917 06:33:55.749] E0917 06:33:55.748820   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:33:55.850] deployment.apps/nginx-deployment created
I0917 06:33:55.850] deployment.apps/nginx-deployment scaled
I0917 06:33:55.850] core.sh:1127: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
I0917 06:33:55.888] deployment.apps "nginx-deployment" deleted
I0917 06:33:55.987] Successful
I0917 06:33:55.987] message:service/expose-test-deployment exposed
I0917 06:33:55.987] has:service/expose-test-deployment exposed
I0917 06:33:56.064] service "expose-test-deployment" deleted
I0917 06:33:56.155] Successful
I0917 06:33:56.155] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0917 06:33:56.155] See 'kubectl expose -h' for help and examples
I0917 06:33:56.155] has:invalid deployment: no selectors
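The expose failure above occurs when the target object carries no label selector and kubectl cannot infer one. A rough illustration with a hypothetical deployment created from a legacy manifest that omits .spec.selector (the object the harness actually exposes is hidden in the skipped lines):
    # exposing an object without a selector gives kubectl nothing to route the Service to
    kubectl expose deployment/selectorless-deployment --port=80
    # expected: error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed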
W0917 06:33:56.256] E0917 06:33:55.866208   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:56.307] I0917 06:33:56.307015   52779 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568702031-1758", Name:"nginx-deployment", UID:"75b80762-2551-4c97-afd5-211b9c35ccf3", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W0917 06:33:56.310] I0917 06:33:56.310034   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568702031-1758", Name:"nginx-deployment-6986c7bc94", UID:"08cd83b8-76e4-419f-bf50-0c81f2717394", APIVersion:"apps/v1", ResourceVersion:"1694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-nfbsr
W0917 06:33:56.314] I0917 06:33:56.313399   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568702031-1758", Name:"nginx-deployment-6986c7bc94", UID:"08cd83b8-76e4-419f-bf50-0c81f2717394", APIVersion:"apps/v1", ResourceVersion:"1694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-4j8gq
W0917 06:33:56.314] I0917 06:33:56.314125   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568702031-1758", Name:"nginx-deployment-6986c7bc94", UID:"08cd83b8-76e4-419f-bf50-0c81f2717394", APIVersion:"apps/v1", ResourceVersion:"1694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-2jwlz
I0917 06:33:56.415] deployment.apps/nginx-deployment created
I0917 06:33:56.415] core.sh:1146: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I0917 06:33:56.487] service/nginx-deployment exposed
I0917 06:33:56.578] core.sh:1150: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
I0917 06:33:56.652] deployment.apps "nginx-deployment" deleted
I0917 06:33:56.659] service "nginx-deployment" deleted
W0917 06:33:56.760] E0917 06:33:56.535272   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:56.760] E0917 06:33:56.645484   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:56.761] E0917 06:33:56.750326   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:56.818] I0917 06:33:56.817459   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"frontend", UID:"17b9d8a9-fbbe-491b-ae3c-481eefa1cc13", APIVersion:"v1", ResourceVersion:"1722", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kfp2j
W0917 06:33:56.821] I0917 06:33:56.820379   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"frontend", UID:"17b9d8a9-fbbe-491b-ae3c-481eefa1cc13", APIVersion:"v1", ResourceVersion:"1722", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dhlss
W0917 06:33:56.822] I0917 06:33:56.821697   52779 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568702031-1758", Name:"frontend", UID:"17b9d8a9-fbbe-491b-ae3c-481eefa1cc13", APIVersion:"v1", ResourceVersion:"1722", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-n46rj
W0917 06:33:56.868] E0917 06:33:56.867724   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 06:33:56.968] replicationcontroller/frontend created
I0917 06:33:56.969] core.sh:1157: Successful get rc frontend {{.spec.replicas}}: 3
I0917 06:33:56.999] service/frontend exposed
I0917 06:33:57.087] core.sh:1161: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0917 06:33:57.172] service/frontend-2 exposed
I0917 06:33:57.262] core.sh:1165: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 443
... skipping 8 lines ...
I0917 06:33:58.111] service "frontend" deleted
I0917 06:33:58.118] service "frontend-2" deleted
I0917 06:33:58.125] service "frontend-3" deleted
I0917 06:33:58.131] service "frontend-4" deleted
I0917 06:33:58.139] service "frontend-5" deleted
I0917 06:33:58.233] Successful
I0917 06:33:58.233] message:error: cannot expose a Node
I0917 06:33:58.233] has:cannot expose
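The "cannot expose a Node" message above reflects that kubectl expose only accepts resources that map naturally to a Service (pods, services, replication controllers, deployments, replica sets). A sketch with an assumed node name:
    # nodes have no pod selector, so expose rejects them outright
    kubectl expose node 127.0.0.1 --port=80
    # expected: error: cannot expose a Node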
I0917 06:33:58.326] Successful
I0917 06:33:58.327] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0917 06:33:58.327] has:metadata.name: Invalid value
I0917 06:33:58.415] Successful
I0917 06:33:58.415] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 7 lines ...
I0917 06:33:58.844] service "etcd-server" deleted
I0917 06:33:58.934] core.sh:1215: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I0917 06:33:59.009] replicationcontroller "frontend" deleted
I0917 06:33:59.102] core.sh:1219: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:33:59.189] core.sh:1223: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0917 06:33:59.341] replicationcontroller/frontend created
W0917 06:33:59.442] E0917 06:33:57.536666   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:59.443] E0917 06:33:57.646858   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:59.443] E0917 06:33:57.751703   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:59.444] E0917 06:33:57.869238   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:59.444] E0917 06:33:58.538117   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:59.445] E0917 06:33:58.648134   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0917 06:33:59.445] E0917 06:33:58.752776   52779 reflector.go:121] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource