PR: bclau: test images: Adds Windows Container images support (part 1)
Result: FAILURE
Tests: 1 failed / 2862 succeeded
Started: 2019-09-16 10:14
Elapsed: 28m44s
Revision:
Builder: gke-prow-ssd-pool-1a225945-d46v
Refs: master:ebd8f9cc, 76838:19272f63
pod: a2d4f09a-d86a-11e9-af7a-7ecbb7a97bb8
infra-commit: e1cbc3ccd
repo: k8s.io/kubernetes
repo-commit: 060321cda7fde1ebf17b31b230bdf0b97000cded
repos: {u'k8s.io/kubernetes': u'master:ebd8f9ccb5c7a7f54f636db3a8a7dc1397046be6,76838:19272f636ed39aeb081ec067427e13e058f67f6c'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestNodePIDPressure 33s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$
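To reproduce against a local checkout, a sketch assuming a standard kubernetes/kubernetes workspace (the integration tests need an etcd binary on PATH; hack/install-etcd.sh is assumed to provide one under third_party/etcd):

./hack/install-etcd.sh                       # assumed helper; installs etcd under third_party/etcd
export PATH="$PWD/third_party/etcd:$PATH"    # make the test's etcd dependency visible
make test-integration WHAT=./test/integration/scheduler KUBE_TEST_ARGS="-run TestNodePIDPressure$"

The CI output for the failing run follows.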
=== RUN   TestNodePIDPressure
W0916 10:38:08.663746  108960 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0916 10:38:08.663766  108960 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0916 10:38:08.663779  108960 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0916 10:38:08.663790  108960 master.go:259] Using reconciler: 
I0916 10:38:08.667069  108960 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.667670  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.667886  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.669466  108960 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0916 10:38:08.669512  108960 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.669929  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.669956  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.670067  108960 reflector.go:158] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0916 10:38:08.671919  108960 store.go:1342] Monitoring events count at <storage-prefix>//events
I0916 10:38:08.671959  108960 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.672278  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.672305  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.672493  108960 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0916 10:38:08.674309  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.677629  108960 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0916 10:38:08.677676  108960 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.677948  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.677976  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.678072  108960 reflector.go:158] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0916 10:38:08.680571  108960 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0916 10:38:08.680778  108960 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.681048  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.681076  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.681190  108960 reflector.go:158] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0916 10:38:08.682570  108960 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0916 10:38:08.682785  108960 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.683052  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.683078  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.683187  108960 reflector.go:158] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0916 10:38:08.684948  108960 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0916 10:38:08.685149  108960 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.685588  108960 reflector.go:158] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0916 10:38:08.687170  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.687262  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.687877  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.688973  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.689005  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.689558  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.690493  108960 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0916 10:38:08.690622  108960 reflector.go:158] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0916 10:38:08.690722  108960 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.691008  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.691036  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.691501  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.693013  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.693444  108960 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0916 10:38:08.693691  108960 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.693832  108960 reflector.go:158] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0916 10:38:08.694636  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.694665  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.697778  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.699186  108960 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0916 10:38:08.699303  108960 reflector.go:158] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0916 10:38:08.700143  108960 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.700276  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.700629  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.700693  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.701505  108960 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0916 10:38:08.701655  108960 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.701849  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.701939  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.702014  108960 reflector.go:158] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0916 10:38:08.703304  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.703454  108960 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0916 10:38:08.703614  108960 reflector.go:158] Listing and watching *core.Node from storage/cacher.go:/minions
I0916 10:38:08.703664  108960 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.703948  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.703978  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.705807  108960 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0916 10:38:08.705953  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.705993  108960 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.706272  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.706297  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.706315  108960 reflector.go:158] Listing and watching *core.Pod from storage/cacher.go:/pods
I0916 10:38:08.707464  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.709283  108960 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0916 10:38:08.709472  108960 reflector.go:158] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0916 10:38:08.711009  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.712021  108960 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.712406  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.712539  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.713871  108960 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0916 10:38:08.714025  108960 reflector.go:158] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0916 10:38:08.715580  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.714101  108960 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.716976  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.717104  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.718501  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.718630  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.720060  108960 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.720429  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.720563  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.721800  108960 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0916 10:38:08.721962  108960 reflector.go:158] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0916 10:38:08.724672  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.725967  108960 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0916 10:38:08.726658  108960 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.727107  108960 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.728072  108960 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.728967  108960 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.729976  108960 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.730844  108960 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.731605  108960 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.731961  108960 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.732271  108960 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.733079  108960 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.733823  108960 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.734204  108960 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.735156  108960 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.735581  108960 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.736289  108960 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.736680  108960 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.737548  108960 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.737882  108960 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.738180  108960 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.738458  108960 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.738801  108960 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.739037  108960 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.739439  108960 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.740240  108960 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.740646  108960 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.741578  108960 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.742498  108960 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.742871  108960 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.743239  108960 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.744084  108960 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.744598  108960 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.745744  108960 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.746653  108960 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.747440  108960 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.748448  108960 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.748910  108960 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.749215  108960 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0916 10:38:08.749369  108960 master.go:461] Enabling API group "authentication.k8s.io".
I0916 10:38:08.749510  108960 master.go:461] Enabling API group "authorization.k8s.io".
I0916 10:38:08.749782  108960 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.750283  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.750446  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.752194  108960 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0916 10:38:08.752269  108960 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0916 10:38:08.753874  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.754104  108960 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.755609  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.755652  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.756963  108960 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0916 10:38:08.757026  108960 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0916 10:38:08.757165  108960 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.757546  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.757584  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.759101  108960 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0916 10:38:08.759129  108960 master.go:461] Enabling API group "autoscaling".
I0916 10:38:08.759312  108960 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.759476  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.759548  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.759570  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.759654  108960 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0916 10:38:08.761413  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.761518  108960 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0916 10:38:08.761706  108960 reflector.go:158] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0916 10:38:08.761704  108960 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.761943  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.761966  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.763509  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.763820  108960 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0916 10:38:08.763850  108960 master.go:461] Enabling API group "batch".
I0916 10:38:08.764057  108960 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.764308  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.764399  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.765352  108960 reflector.go:158] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0916 10:38:08.766774  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.766891  108960 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0916 10:38:08.766924  108960 master.go:461] Enabling API group "certificates.k8s.io".
I0916 10:38:08.767058  108960 reflector.go:158] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0916 10:38:08.767126  108960 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.767433  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.767464  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.768141  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.768633  108960 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0916 10:38:08.768813  108960 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0916 10:38:08.768842  108960 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.769110  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.769142  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.772373  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.772772  108960 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0916 10:38:08.772846  108960 master.go:461] Enabling API group "coordination.k8s.io".
I0916 10:38:08.772869  108960 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0916 10:38:08.773480  108960 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0916 10:38:08.773381  108960 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.774510  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.775305  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.775736  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.781264  108960 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0916 10:38:08.781382  108960 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0916 10:38:08.782681  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.783231  108960 master.go:461] Enabling API group "extensions".
I0916 10:38:08.784183  108960 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.785365  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.785433  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.789419  108960 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0916 10:38:08.789504  108960 reflector.go:158] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0916 10:38:08.791136  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.792738  108960 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.794026  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.794078  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.797500  108960 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0916 10:38:08.797550  108960 master.go:461] Enabling API group "networking.k8s.io".
I0916 10:38:08.797606  108960 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.797913  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.797954  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.798088  108960 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0916 10:38:08.799710  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.799781  108960 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0916 10:38:08.799806  108960 master.go:461] Enabling API group "node.k8s.io".
I0916 10:38:08.800024  108960 reflector.go:158] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0916 10:38:08.800005  108960 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.800313  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.800365  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.801665  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.803147  108960 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0916 10:38:08.803307  108960 reflector.go:158] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0916 10:38:08.803355  108960 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.804469  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.805509  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.805663  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.807047  108960 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0916 10:38:08.807082  108960 master.go:461] Enabling API group "policy".
I0916 10:38:08.807131  108960 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.807348  108960 reflector.go:158] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0916 10:38:08.807434  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.807642  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.808645  108960 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0916 10:38:08.809016  108960 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0916 10:38:08.809228  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.811269  108960 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.812052  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.812518  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.812630  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.817671  108960 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0916 10:38:08.817814  108960 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0916 10:38:08.818094  108960 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.818469  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.820849  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.819317  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.823439  108960 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0916 10:38:08.823685  108960 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.823938  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.823964  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.824086  108960 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0916 10:38:08.827554  108960 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0916 10:38:08.827630  108960 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.827849  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.827874  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.827920  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.827975  108960 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0916 10:38:08.829597  108960 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0916 10:38:08.829812  108960 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.830000  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.830028  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.830054  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.830126  108960 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0916 10:38:08.831710  108960 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0916 10:38:08.831763  108960 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.831909  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.831936  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.832033  108960 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0916 10:38:08.832797  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.836547  108960 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0916 10:38:08.836763  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.836877  108960 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.837186  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.837192  108960 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0916 10:38:08.837218  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.840019  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.845644  108960 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0916 10:38:08.846003  108960 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0916 10:38:08.845903  108960 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0916 10:38:08.847277  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.850535  108960 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.850984  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.851073  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.853081  108960 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0916 10:38:08.853248  108960 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0916 10:38:08.854782  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.856235  108960 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.856876  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.856973  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.858582  108960 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0916 10:38:08.858769  108960 master.go:461] Enabling API group "scheduling.k8s.io".
I0916 10:38:08.858712  108960 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0916 10:38:08.861211  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.862450  108960 master.go:450] Skipping disabled API group "settings.k8s.io".
I0916 10:38:08.862784  108960 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.863077  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.863148  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.865169  108960 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0916 10:38:08.865305  108960 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0916 10:38:08.869047  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.870502  108960 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.870818  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.870890  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.872432  108960 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0916 10:38:08.872561  108960 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0916 10:38:08.873511  108960 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.873865  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.873958  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.874286  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.876130  108960 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0916 10:38:08.876178  108960 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.876470  108960 reflector.go:158] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0916 10:38:08.876574  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.876613  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.878138  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.878473  108960 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0916 10:38:08.878677  108960 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.878875  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.878897  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.879035  108960 reflector.go:158] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0916 10:38:08.880569  108960 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0916 10:38:08.880761  108960 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.880911  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.880938  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.881050  108960 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0916 10:38:08.881230  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.882620  108960 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0916 10:38:08.882651  108960 master.go:461] Enabling API group "storage.k8s.io".
I0916 10:38:08.882849  108960 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.882939  108960 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0916 10:38:08.883064  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.883084  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.883750  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.885177  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.885671  108960 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0916 10:38:08.885739  108960 reflector.go:158] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0916 10:38:08.885877  108960 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.886016  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.886039  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.886876  108960 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0916 10:38:08.887072  108960 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.887222  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.887244  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.887364  108960 reflector.go:158] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0916 10:38:08.888075  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.888795  108960 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0916 10:38:08.888971  108960 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.889085  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.889112  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.889191  108960 reflector.go:158] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0916 10:38:08.889449  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.890593  108960 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0916 10:38:08.890770  108960 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.890937  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.890957  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.891038  108960 reflector.go:158] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0916 10:38:08.891536  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.892989  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.893421  108960 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0916 10:38:08.893446  108960 master.go:461] Enabling API group "apps".
I0916 10:38:08.893488  108960 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.893636  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.893658  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.893733  108960 reflector.go:158] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0916 10:38:08.895272  108960 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0916 10:38:08.895364  108960 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.895532  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.895560  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.895641  108960 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0916 10:38:08.896953  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.897731  108960 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0916 10:38:08.897778  108960 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.897840  108960 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0916 10:38:08.897910  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.897928  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.899105  108960 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0916 10:38:08.899158  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.899260  108960 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0916 10:38:08.899145  108960 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.899418  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.899440  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.900683  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.900712  108960 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0916 10:38:08.900733  108960 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0916 10:38:08.900766  108960 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.900939  108960 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0916 10:38:08.901109  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:08.901129  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:08.902225  108960 store.go:1342] Monitoring events count at <storage-prefix>//events
I0916 10:38:08.902242  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.902253  108960 master.go:461] Enabling API group "events.k8s.io".
I0916 10:38:08.902467  108960 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0916 10:38:08.902541  108960 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.902719  108960 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.902967  108960 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.903083  108960 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.903184  108960 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.903273  108960 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.903506  108960 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.903594  108960 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.903631  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.903687  108960 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.903776  108960 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.904667  108960 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.904915  108960 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.905817  108960 watch_cache.go:405] Replace watchCache (rev: 30440) 
I0916 10:38:08.906646  108960 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.906919  108960 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.907965  108960 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.908268  108960 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.909148  108960 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.909541  108960 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.910486  108960 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.910793  108960 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 10:38:08.910875  108960 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0916 10:38:08.911683  108960 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.916730  108960 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.917033  108960 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.917938  108960 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.918693  108960 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.919595  108960 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.919847  108960 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.920838  108960 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.921722  108960 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.921969  108960 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.922746  108960 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 10:38:08.922843  108960 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0916 10:38:08.923787  108960 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.924048  108960 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.924653  108960 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.925411  108960 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.925886  108960 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.926584  108960 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.927350  108960 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.927925  108960 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.928458  108960 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.929188  108960 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.929826  108960 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 10:38:08.929962  108960 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0916 10:38:08.930667  108960 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.931199  108960 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 10:38:08.931264  108960 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0916 10:38:08.932041  108960 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.932561  108960 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.932784  108960 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.933372  108960 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.933912  108960 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.934465  108960 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.935059  108960 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 10:38:08.935124  108960 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0916 10:38:08.936137  108960 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.937008  108960 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.937273  108960 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.938264  108960 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.938609  108960 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.938856  108960 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.939722  108960 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.940013  108960 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.940263  108960 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.941286  108960 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.941778  108960 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.942138  108960 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0916 10:38:08.942240  108960 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0916 10:38:08.942257  108960 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0916 10:38:08.943148  108960 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.943956  108960 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.944888  108960 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.945747  108960 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0916 10:38:08.946767  108960 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a4b4c0cf-8ef0-435c-88e3-7490108e6603", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
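Note on the storagebackend.Config dumps above: CompactionInterval and CountMetricPollPeriod are printed as raw nanosecond counts. A minimal, purely illustrative Go snippet (not part of the test) shows that these correspond to a 5-minute compaction interval and a 1-minute count-metric poll period:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Raw nanosecond values copied from the Config dumps in the log above.
	compactionInterval := time.Duration(300000000000)
	countMetricPollPeriod := time.Duration(60000000000)
	fmt.Println(compactionInterval, countMetricPollPeriod) // prints: 5m0s 1m0s
}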
I0916 10:38:08.952218  108960 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.414096ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41994]
I0916 10:38:08.952602  108960 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:38:08.952623  108960 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0916 10:38:08.952634  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:08.952646  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:08.952655  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:08.952662  108960 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:08.952685  108960 httplog.go:90] GET /healthz: (188.921µs) 0 [Go-http-client/1.1 127.0.0.1:41994]
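The block above is a verbose /healthz report: checks prefixed with [+] passed, checks prefixed with [-] failed (with their reasons withheld), and the request keeps failing until every check passes. A minimal sketch of polling /healthz until it returns 200, assuming a hypothetical apiserver address (this is not the test's own code):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthy polls GET /healthz until it returns 200, the way the
// repeated probes in this log eventually succeed once every check is ok.
func waitForHealthy(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("%s/healthz still failing after %s", baseURL, timeout)
}

func main() {
	// The address is an assumption for illustration; the log does not show
	// the test apiserver's listen address.
	if err := waitForHealthy("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}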
I0916 10:38:08.955983  108960 httplog.go:90] GET /api/v1/services: (1.527833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41996]
I0916 10:38:08.960367  108960 httplog.go:90] GET /api/v1/services: (1.303852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41996]
I0916 10:38:08.962861  108960 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:38:08.962892  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:08.962905  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:08.962915  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:08.962924  108960 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:08.962950  108960 httplog.go:90] GET /healthz: (198.228µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41994]
I0916 10:38:08.965808  108960 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.770913ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41996]
I0916 10:38:08.965891  108960 httplog.go:90] GET /api/v1/services: (1.696122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41994]
I0916 10:38:08.966101  108960 httplog.go:90] GET /api/v1/services: (1.875537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0916 10:38:08.968621  108960 httplog.go:90] POST /api/v1/namespaces: (2.40624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41996]
I0916 10:38:08.970536  108960 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.494946ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0916 10:38:08.972703  108960 httplog.go:90] POST /api/v1/namespaces: (1.695338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0916 10:38:08.974148  108960 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.155438ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0916 10:38:08.976228  108960 httplog.go:90] POST /api/v1/namespaces: (1.734848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0916 10:38:09.053477  108960 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:38:09.053516  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.053529  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.053539  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.053547  108960 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.053580  108960 httplog.go:90] GET /healthz: (292.52µs) 0 [Go-http-client/1.1 127.0.0.1:42000]
I0916 10:38:09.064689  108960 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:38:09.064728  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.064741  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.064751  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.064760  108960 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.064793  108960 httplog.go:90] GET /healthz: (272.877µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0916 10:38:09.153477  108960 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:38:09.153516  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.153530  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.153540  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.153548  108960 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.153583  108960 httplog.go:90] GET /healthz: (287.753µs) 0 [Go-http-client/1.1 127.0.0.1:42000]
I0916 10:38:09.164692  108960 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:38:09.164733  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.164747  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.164756  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.164765  108960 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.164797  108960 httplog.go:90] GET /healthz: (271.522µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0916 10:38:09.253434  108960 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:38:09.253475  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.253488  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.253498  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.253506  108960 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.253538  108960 httplog.go:90] GET /healthz: (297.212µs) 0 [Go-http-client/1.1 127.0.0.1:42000]
I0916 10:38:09.264758  108960 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:38:09.264799  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.264812  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.264823  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.264832  108960 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.264861  108960 httplog.go:90] GET /healthz: (282.842µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0916 10:38:09.353703  108960 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:38:09.353741  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.353754  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.353763  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.353771  108960 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.353806  108960 httplog.go:90] GET /healthz: (281.737µs) 0 [Go-http-client/1.1 127.0.0.1:42000]
I0916 10:38:09.364649  108960 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:38:09.364688  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.364700  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.364710  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.364718  108960 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.364752  108960 httplog.go:90] GET /healthz: (268.261µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0916 10:38:09.453411  108960 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:38:09.453458  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.453471  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.453480  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.453490  108960 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.453526  108960 httplog.go:90] GET /healthz: (294.672µs) 0 [Go-http-client/1.1 127.0.0.1:42000]
I0916 10:38:09.464698  108960 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:38:09.464742  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.464754  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.464764  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.464774  108960 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.464809  108960 httplog.go:90] GET /healthz: (280.708µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0916 10:38:09.553438  108960 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:38:09.553484  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.553497  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.553507  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.553515  108960 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.553553  108960 httplog.go:90] GET /healthz: (311.262µs) 0 [Go-http-client/1.1 127.0.0.1:42000]
I0916 10:38:09.564898  108960 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:38:09.564939  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.564951  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.564962  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.564970  108960 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.565001  108960 httplog.go:90] GET /healthz: (284.215µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0916 10:38:09.653462  108960 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:38:09.653500  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.653512  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.653523  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.653531  108960 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.653576  108960 httplog.go:90] GET /healthz: (317.221µs) 0 [Go-http-client/1.1 127.0.0.1:42000]
I0916 10:38:09.664723  108960 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0916 10:38:09.664763  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.664778  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.664788  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.664797  108960 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.664831  108960 httplog.go:90] GET /healthz: (280.154µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0916 10:38:09.665926  108960 client.go:361] parsed scheme: "endpoint"
I0916 10:38:09.666009  108960 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0916 10:38:09.754822  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.754858  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.754869  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.754877  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.754929  108960 httplog.go:90] GET /healthz: (1.611008ms) 0 [Go-http-client/1.1 127.0.0.1:42000]
I0916 10:38:09.769682  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.769715  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.769727  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.769736  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.769777  108960 httplog.go:90] GET /healthz: (1.456783ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0916 10:38:09.854586  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.854626  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.854637  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.854647  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.854698  108960 httplog.go:90] GET /healthz: (1.419743ms) 0 [Go-http-client/1.1 127.0.0.1:42000]
I0916 10:38:09.866749  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.866781  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.866791  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.866799  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.866845  108960 httplog.go:90] GET /healthz: (1.471346ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0916 10:38:09.952695  108960 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.432337ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41998]
I0916 10:38:09.952737  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.595124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0916 10:38:09.955211  108960 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.737866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41998]
I0916 10:38:09.955281  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.868273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0916 10:38:09.955644  108960 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.716176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42038]
I0916 10:38:09.955813  108960 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0916 10:38:09.958425  108960 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (2.264902ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:09.958673  108960 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.841914ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42042]
I0916 10:38:09.958890  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.799074ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41998]
I0916 10:38:09.959022  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.959037  108960 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0916 10:38:09.959048  108960 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0916 10:38:09.959056  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0916 10:38:09.959085  108960 httplog.go:90] GET /healthz: (2.582754ms) 0 [Go-http-client/1.1 127.0.0.1:42000]
I0916 10:38:09.962457  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (2.746544ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0916 10:38:09.963061  108960 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (3.201005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:09.963105  108960 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (3.439697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:09.963425  108960 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0916 10:38:09.963442  108960 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
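The two PriorityClass creations above use the values Kubernetes reserves for its system priority classes; a small reference snippet with the values copied from the log (illustrative only):

package main

import "fmt"

func main() {
	// Values copied from the storage_scheduling log lines above.
	const (
		systemNodeCritical    = 2000001000 // PriorityClass system-node-critical
		systemClusterCritical = 2000000000 // PriorityClass system-cluster-critical
	)
	fmt.Println("system-node-critical:", systemNodeCritical)
	fmt.Println("system-cluster-critical:", systemClusterCritical)
}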
I0916 10:38:09.965512  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:09.965537  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:09.965573  108960 httplog.go:90] GET /healthz: (1.30141ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:09.965636  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.878684ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:09.967528  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.389314ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:09.969913  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (2.039261ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:09.971708  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.448507ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:09.974885  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (2.362031ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:09.976793  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.370926ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:09.979682  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.110546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:09.979940  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0916 10:38:09.981464  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.362389ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:09.983690  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.807607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:09.984012  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
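Each cluster role bootstrapped above follows the same pattern: a GET by name returns 404, then a POST returns 201 and a "created clusterrole" line is logged. A rough Go sketch of that look-up-then-create flow against the REST paths shown in the log; the server address and payload are assumptions for illustration, not what the RBAC bootstrapper actually sends:

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// apiServer is an assumed address for illustration; the log does not show it.
const apiServer = "http://127.0.0.1:8080"

// ensureClusterRole mirrors the GET 404 -> POST 201 pairs in the log:
// look the role up by name and create it only when it does not exist yet.
func ensureClusterRole(name string, body []byte) error {
	getURL := apiServer + "/apis/rbac.authorization.k8s.io/v1/clusterroles/" + name
	resp, err := http.Get(getURL)
	if err != nil {
		return err
	}
	resp.Body.Close()
	switch resp.StatusCode {
	case http.StatusOK:
		return nil // already present
	case http.StatusNotFound:
		// not found: fall through to the create below
	default:
		return fmt.Errorf("unexpected status %d looking up %s", resp.StatusCode, name)
	}
	postURL := apiServer + "/apis/rbac.authorization.k8s.io/v1/clusterroles"
	resp, err = http.Post(postURL, "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("creating %s failed with status %d", name, resp.StatusCode)
	}
	return nil
}

func main() {
	// Placeholder ClusterRole body, not what the bootstrapper writes.
	role := []byte(`{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"name":"example"}}`)
	if err := ensureClusterRole("example", role); err != nil {
		fmt.Println(err)
	}
}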
I0916 10:38:09.985267  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.0745ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:09.987673  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.833545ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:09.987881  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0916 10:38:09.989092  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.048833ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:09.991644  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.977186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:09.991927  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0916 10:38:09.993095  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (983.308µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:09.995389  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.926532ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:09.995592  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0916 10:38:09.996802  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.069551ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.006134  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.066555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.006664  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0916 10:38:10.008462  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.416263ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.010939  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.01031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.011186  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0916 10:38:10.012921  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.399964ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.015267  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.737978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.015647  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0916 10:38:10.016986  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.127107ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.019762  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.205062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.020073  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0916 10:38:10.021621  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.289701ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.024530  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.373547ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.024785  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0916 10:38:10.026059  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.010188ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.028473  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.80511ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.028659  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0916 10:38:10.030064  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.209454ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.034320  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.623443ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.034678  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0916 10:38:10.036065  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.176909ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.038770  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.235715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.041480  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0916 10:38:10.046906  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (5.165991ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.050474  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.535547ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.050961  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0916 10:38:10.052653  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.406323ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.054092  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.054123  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.054160  108960 httplog.go:90] GET /healthz: (1.027502ms) 0 [Go-http-client/1.1 127.0.0.1:42044]
I0916 10:38:10.055572  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.425518ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.055834  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0916 10:38:10.057186  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.162243ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.059415  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.780417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.060082  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0916 10:38:10.061473  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.136769ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.063887  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.871737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.064131  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0916 10:38:10.065257  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (945.388µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.067698  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.956521ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.068068  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0916 10:38:10.069509  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.069535  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.069572  108960 httplog.go:90] GET /healthz: (5.104732ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.070753  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.730409ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.073350  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.909309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.073596  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0916 10:38:10.075064  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.250759ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.077549  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.020972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.077748  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0916 10:38:10.078930  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (984.354µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.081599  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.196967ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.082024  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0916 10:38:10.083434  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.18232ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.086262  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.072316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.086908  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0916 10:38:10.088877  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.597584ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.091481  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.017873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.091983  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0916 10:38:10.093769  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.623141ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.097202  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.647163ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.097659  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0916 10:38:10.099023  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (983.078µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.101445  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.009348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.101735  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0916 10:38:10.103075  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.139926ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.105595  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.802865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.105876  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0916 10:38:10.107113  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.036683ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.109574  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.042922ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.109838  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0916 10:38:10.111793  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.735051ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.115395  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.686975ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.115663  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0916 10:38:10.117142  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.267609ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.121718  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.128796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.122055  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0916 10:38:10.123382  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.110545ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.127115  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.012266ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.127844  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0916 10:38:10.129979  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.813116ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.133306  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.670762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.133630  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0916 10:38:10.136033  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (2.118459ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.138811  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.791667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.139101  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0916 10:38:10.140831  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.472397ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.143740  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.187828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.144089  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0916 10:38:10.145916  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.493977ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.148485  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.869874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.148850  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0916 10:38:10.150114  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.02288ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.152665  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.067639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.153016  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0916 10:38:10.154683  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.154810  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.154937  108960 httplog.go:90] GET /healthz: (1.831888ms) 0 [Go-http-client/1.1 127.0.0.1:42040]
I0916 10:38:10.154812  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.477733ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.157628  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.002059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.158006  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0916 10:38:10.160018  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.629152ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.163788  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.060507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.164066  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0916 10:38:10.169530  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.169560  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.169605  108960 httplog.go:90] GET /healthz: (4.472587ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.170395  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (6.102389ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.173666  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.590982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.174034  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0916 10:38:10.176633  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (2.316554ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.179711  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.280235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.180207  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0916 10:38:10.181587  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.161337ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.184685  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.033038ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.185175  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0916 10:38:10.186564  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.114967ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.189321  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.251677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.189908  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0916 10:38:10.191199  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (981.786µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.193511  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.732808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.193964  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0916 10:38:10.195699  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.215189ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.198569  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.383345ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.198849  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0916 10:38:10.200210  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.042664ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.203058  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.371316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.203316  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0916 10:38:10.204909  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.199712ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.207623  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.052137ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.207959  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0916 10:38:10.209573  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.267144ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.211967  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.870245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.212451  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0916 10:38:10.213949  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.199006ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.216551  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.945629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.216846  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0916 10:38:10.218269  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.090588ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.220991  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.176816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.221263  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0916 10:38:10.223884  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.183947ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.226287  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.74028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.226676  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0916 10:38:10.227950  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (982.022µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.230143  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.60358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.230384  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0916 10:38:10.231509  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (921.938µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.233841  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.795239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.234002  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0916 10:38:10.235192  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.03292ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.237690  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.915472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.238111  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0916 10:38:10.239488  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.10987ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.253500  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.270151ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.253900  108960 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0916 10:38:10.254769  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.254968  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.255237  108960 httplog.go:90] GET /healthz: (1.750921ms) 0 [Go-http-client/1.1 127.0.0.1:42040]
I0916 10:38:10.267144  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.267740  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.268625  108960 httplog.go:90] GET /healthz: (3.851956ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.272603  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.415807ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.294263  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.953203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.295161  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0916 10:38:10.315552  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.307836ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.334059  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.654206ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.334656  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0916 10:38:10.352841  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.632087ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.354040  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.354219  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.354409  108960 httplog.go:90] GET /healthz: (1.179865ms) 0 [Go-http-client/1.1 127.0.0.1:42044]
I0916 10:38:10.366569  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.366605  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.366654  108960 httplog.go:90] GET /healthz: (1.832874ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.376638  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.897007ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.377230  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0916 10:38:10.393144  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.587057ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.414202  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.98822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.414727  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
E0916 10:38:10.432146  108960 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:34505/apis/events.k8s.io/v1beta1/namespaces/permit-pluginaa3c2dbb-3ef5-4b53-a3b1-eab66617ffff/events: dial tcp 127.0.0.1:34505: connect: connection refused' (may retry after sleeping)
I0916 10:38:10.432996  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.802835ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.454966  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.732934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.455265  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0916 10:38:10.456514  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.456543  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.456588  108960 httplog.go:90] GET /healthz: (3.411055ms) 0 [Go-http-client/1.1 127.0.0.1:42040]
I0916 10:38:10.465806  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.465845  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.465894  108960 httplog.go:90] GET /healthz: (1.378022ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.472685  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.407149ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.493678  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.399918ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.493976  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0916 10:38:10.512685  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.485448ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.533763  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.537278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.534072  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0916 10:38:10.552794  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.535292ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.554836  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.554865  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.554899  108960 httplog.go:90] GET /healthz: (1.096956ms) 0 [Go-http-client/1.1 127.0.0.1:42040]
I0916 10:38:10.565737  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.565770  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.565814  108960 httplog.go:90] GET /healthz: (1.309764ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.573741  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.536372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.574154  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0916 10:38:10.592980  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.635848ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.613761  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.481883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.614062  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0916 10:38:10.632774  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.508956ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.654015  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.76145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.654270  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0916 10:38:10.655020  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.655056  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.655096  108960 httplog.go:90] GET /healthz: (1.291781ms) 0 [Go-http-client/1.1 127.0.0.1:42044]
I0916 10:38:10.666097  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.666133  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.666180  108960 httplog.go:90] GET /healthz: (1.39085ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.672841  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.570186ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.694018  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.754365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.694282  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0916 10:38:10.712889  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.645796ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.733489  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.259575ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.734023  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0916 10:38:10.752894  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.510135ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:10.754033  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.754063  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.754099  108960 httplog.go:90] GET /healthz: (971.082µs) 0 [Go-http-client/1.1 127.0.0.1:42040]
I0916 10:38:10.765814  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.765850  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.765898  108960 httplog.go:90] GET /healthz: (1.404174ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.773945  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.704654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.774242  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0916 10:38:10.792991  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.736558ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.819072  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.539402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.819373  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0916 10:38:10.832712  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.463243ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.853667  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.29683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.853926  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0916 10:38:10.855096  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.855123  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.855153  108960 httplog.go:90] GET /healthz: (1.090889ms) 0 [Go-http-client/1.1 127.0.0.1:42040]
I0916 10:38:10.865934  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.865971  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.866020  108960 httplog.go:90] GET /healthz: (1.477083ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.872699  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.414348ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.893732  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.466032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.894302  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0916 10:38:10.912796  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.5249ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.933660  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.266533ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.933941  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0916 10:38:10.952769  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.514879ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.954614  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.954644  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.954691  108960 httplog.go:90] GET /healthz: (1.397001ms) 0 [Go-http-client/1.1 127.0.0.1:42040]
I0916 10:38:10.966166  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:10.966201  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:10.966250  108960 httplog.go:90] GET /healthz: (1.722809ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.973798  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.568254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:10.974061  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0916 10:38:10.992698  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.444164ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.013746  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.484253ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.014099  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0916 10:38:11.032750  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.473361ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.054026  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.054071  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.054108  108960 httplog.go:90] GET /healthz: (983.933µs) 0 [Go-http-client/1.1 127.0.0.1:42044]
I0916 10:38:11.054173  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.907513ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.054461  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0916 10:38:11.065623  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.065658  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.065707  108960 httplog.go:90] GET /healthz: (1.187921ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.073159  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.673058ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.093871  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.559505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.094220  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0916 10:38:11.112767  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.513303ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.133776  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.467024ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.134179  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0916 10:38:11.153097  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.683421ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.154646  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.154684  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.154732  108960 httplog.go:90] GET /healthz: (1.396367ms) 0 [Go-http-client/1.1 127.0.0.1:42044]
I0916 10:38:11.166511  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.166540  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.166597  108960 httplog.go:90] GET /healthz: (1.312381ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.173468  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.233031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.173756  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0916 10:38:11.192858  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.631625ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.213742  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.46183ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.214492  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0916 10:38:11.232666  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.470579ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.254477  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.177782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.254759  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0916 10:38:11.256079  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.256105  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.256147  108960 httplog.go:90] GET /healthz: (2.448042ms) 0 [Go-http-client/1.1 127.0.0.1:42040]
I0916 10:38:11.268064  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.268092  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.268164  108960 httplog.go:90] GET /healthz: (3.279655ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.272574  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.366865ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.293963  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.455613ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.294224  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0916 10:38:11.313169  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.326558ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.333737  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.491308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.334232  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0916 10:38:11.353078  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.887259ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.357673  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.357702  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.357747  108960 httplog.go:90] GET /healthz: (3.742857ms) 0 [Go-http-client/1.1 127.0.0.1:42040]
I0916 10:38:11.365676  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.365705  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.365758  108960 httplog.go:90] GET /healthz: (1.306169ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.373408  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.197906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.373833  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0916 10:38:11.392436  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.211278ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.413545  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.257979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.413806  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0916 10:38:11.432565  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.308278ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.454502  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.239149ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.454813  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0916 10:38:11.455961  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.455991  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.456027  108960 httplog.go:90] GET /healthz: (2.857414ms) 0 [Go-http-client/1.1 127.0.0.1:42044]
I0916 10:38:11.465936  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.465976  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.466022  108960 httplog.go:90] GET /healthz: (1.537208ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.473002  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.749588ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.493762  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.514779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.494087  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0916 10:38:11.512870  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.641922ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.534532  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.255827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.534825  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0916 10:38:11.553047  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.725681ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.554409  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.554438  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.554494  108960 httplog.go:90] GET /healthz: (1.291927ms) 0 [Go-http-client/1.1 127.0.0.1:42040]
I0916 10:38:11.565737  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.565774  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.565817  108960 httplog.go:90] GET /healthz: (1.315639ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.574984  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.719349ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.575996  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0916 10:38:11.592795  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.548402ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.614775  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.232587ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.615880  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0916 10:38:11.633246  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.834805ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.654541  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.314435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:11.654836  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0916 10:38:11.656379  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.656413  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.656462  108960 httplog.go:90] GET /healthz: (2.929923ms) 0 [Go-http-client/1.1 127.0.0.1:42044]
I0916 10:38:11.665998  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.666034  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.666082  108960 httplog.go:90] GET /healthz: (1.558121ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.673000  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.802668ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.694002  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.783584ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.694340  108960 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0916 10:38:11.713200  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.354634ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.715618  108960 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.824568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.734084  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.790427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.734651  108960 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0916 10:38:11.752769  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.515347ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.754501  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.754529  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.754564  108960 httplog.go:90] GET /healthz: (1.408821ms) 0 [Go-http-client/1.1 127.0.0.1:42040]
I0916 10:38:11.755077  108960 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.222563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.765860  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.765898  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.765945  108960 httplog.go:90] GET /healthz: (1.393053ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.773903  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.620388ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.774186  108960 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0916 10:38:11.793682  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (2.459009ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.796319  108960 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.111002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.814426  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.099546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.814893  108960 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0916 10:38:11.838228  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (2.790111ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.843130  108960 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.488825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.853524  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.292895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.853821  108960 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0916 10:38:11.855789  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.855815  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.855858  108960 httplog.go:90] GET /healthz: (1.552415ms) 0 [Go-http-client/1.1 127.0.0.1:42044]
I0916 10:38:11.865349  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.865386  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.865435  108960 httplog.go:90] GET /healthz: (973.635µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.872601  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.171432ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.874644  108960 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.587066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.894834  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.246509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.895141  108960 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0916 10:38:11.914150  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (2.841036ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.916394  108960 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.756112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.934110  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.798851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.934556  108960 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0916 10:38:11.952928  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.655618ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.954973  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.955008  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.955046  108960 httplog.go:90] GET /healthz: (1.375309ms) 0 [Go-http-client/1.1 127.0.0.1:42040]
I0916 10:38:11.955296  108960 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.825011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.966013  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:11.966054  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:11.966103  108960 httplog.go:90] GET /healthz: (1.575857ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.973846  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.615541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.974169  108960 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0916 10:38:11.993122  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.838906ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:11.995683  108960 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.009337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.032125  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (9.795762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.032427  108960 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0916 10:38:12.034057  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.349025ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.035883  108960 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.385801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.053881  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.577348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.054278  108960 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0916 10:38:12.054972  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:12.054996  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:12.055041  108960 httplog.go:90] GET /healthz: (1.369387ms) 0 [Go-http-client/1.1 127.0.0.1:42040]
I0916 10:38:12.065834  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:12.065869  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:12.065911  108960 httplog.go:90] GET /healthz: (1.414074ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:12.072944  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.76438ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:12.075676  108960 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.226907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:12.094876  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.504026ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:12.095152  108960 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0916 10:38:12.112890  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.594443ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:12.115613  108960 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.236035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:12.136181  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.868857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:12.136532  108960 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0916 10:38:12.152755  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.494103ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:12.156005  108960 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.705951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:12.157748  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:12.157776  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:12.157814  108960 httplog.go:90] GET /healthz: (1.652076ms) 0 [Go-http-client/1.1 127.0.0.1:42044]
I0916 10:38:12.165932  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:12.165980  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:12.166026  108960 httplog.go:90] GET /healthz: (1.533691ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.174109  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.919437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.174490  108960 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0916 10:38:12.192777  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.480174ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.195191  108960 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.950187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.213997  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.776279ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.214521  108960 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0916 10:38:12.232827  108960 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.575376ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.236757  108960 httplog.go:90] GET /api/v1/namespaces/kube-public: (3.083751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.254591  108960 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0916 10:38:12.254624  108960 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0916 10:38:12.254667  108960 httplog.go:90] GET /healthz: (1.069663ms) 0 [Go-http-client/1.1 127.0.0.1:42040]
I0916 10:38:12.254736  108960 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (3.474472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.255025  108960 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0916 10:38:12.266111  108960 httplog.go:90] GET /healthz: (1.521022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.268446  108960 httplog.go:90] GET /api/v1/namespaces/default: (1.883444ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.271153  108960 httplog.go:90] POST /api/v1/namespaces: (2.076334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.273751  108960 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.375769ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.278805  108960 httplog.go:90] POST /api/v1/namespaces/default/services: (4.307131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.280540  108960 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.274953ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.283242  108960 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (2.106254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.354669  108960 httplog.go:90] GET /healthz: (1.363762ms) 200 [Go-http-client/1.1 127.0.0.1:42044]
W0916 10:38:12.355779  108960 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:38:12.355875  108960 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:38:12.355891  108960 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:38:12.355983  108960 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:38:12.356000  108960 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:38:12.356067  108960 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:38:12.356086  108960 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:38:12.356131  108960 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:38:12.356144  108960 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:38:12.356159  108960 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0916 10:38:12.356253  108960 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0916 10:38:12.356291  108960 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0916 10:38:12.356367  108960 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0916 10:38:12.356776  108960 shared_informer.go:197] Waiting for caches to sync for scheduler
I0916 10:38:12.357510  108960 reflector.go:120] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0916 10:38:12.357558  108960 reflector.go:158] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0916 10:38:12.358834  108960 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (771.958µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:12.360194  108960 get.go:251] Starting watch for /api/v1/pods, rv=30440 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=7m8s
I0916 10:38:12.457212  108960 shared_informer.go:227] caches populated
I0916 10:38:12.457252  108960 shared_informer.go:204] Caches are synced for scheduler 
I0916 10:38:12.457692  108960 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.457722  108960 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.458160  108960 reflector.go:120] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.458175  108960 reflector.go:158] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.458650  108960 reflector.go:120] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.458667  108960 reflector.go:158] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.459031  108960 reflector.go:120] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.459045  108960 reflector.go:158] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.459441  108960 reflector.go:120] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.459456  108960 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.460358  108960 reflector.go:120] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.460467  108960 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.460558  108960 reflector.go:120] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.460570  108960 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.461203  108960 reflector.go:120] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.461219  108960 reflector.go:158] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.461589  108960 reflector.go:120] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.461608  108960 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.469780  108960 reflector.go:120] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.469820  108960 reflector.go:158] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0916 10:38:12.472066  108960 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (672.288µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:12.472066  108960 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (524.858µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42356]
I0916 10:38:12.472614  108960 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (365.777µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42350]
I0916 10:38:12.472627  108960 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (480.785µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42340]
I0916 10:38:12.473017  108960 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (312.011µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42342]
I0916 10:38:12.473079  108960 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (470.504µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42352]
I0916 10:38:12.473151  108960 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (394.732µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42348]
I0916 10:38:12.473452  108960 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (338.097µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42346]
I0916 10:38:12.473613  108960 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (343.737µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42344]
I0916 10:38:12.473776  108960 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (585.479µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42354]
I0916 10:38:12.474137  108960 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=30440 labels= fields= timeout=9m20s
I0916 10:38:12.474537  108960 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30440 labels= fields= timeout=7m47s
I0916 10:38:12.474653  108960 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=30440 labels= fields= timeout=6m45s
I0916 10:38:12.474666  108960 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=30440 labels= fields= timeout=9m51s
I0916 10:38:12.474893  108960 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30440 labels= fields= timeout=6m27s
I0916 10:38:12.475249  108960 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=30440 labels= fields= timeout=6m54s
I0916 10:38:12.475580  108960 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=30440 labels= fields= timeout=5m10s
I0916 10:38:12.475649  108960 get.go:251] Starting watch for /api/v1/nodes, rv=30440 labels= fields= timeout=5m58s
I0916 10:38:12.475675  108960 get.go:251] Starting watch for /api/v1/services, rv=30626 labels= fields= timeout=7m38s
I0916 10:38:12.477105  108960 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30440 labels= fields= timeout=9m56s
E0916 10:38:12.522825  108960 factory.go:590] Error getting pod permit-pluginaa3c2dbb-3ef5-4b53-a3b1-eab66617ffff/test-pod for retry: Get http://127.0.0.1:34505/api/v1/namespaces/permit-pluginaa3c2dbb-3ef5-4b53-a3b1-eab66617ffff/pods/test-pod: dial tcp 127.0.0.1:34505: connect: connection refused; retrying...
I0916 10:38:12.557582  108960 shared_informer.go:227] caches populated
I0916 10:38:12.557630  108960 shared_informer.go:227] caches populated
I0916 10:38:12.557637  108960 shared_informer.go:227] caches populated
I0916 10:38:12.557644  108960 shared_informer.go:227] caches populated
I0916 10:38:12.557650  108960 shared_informer.go:227] caches populated
I0916 10:38:12.557657  108960 shared_informer.go:227] caches populated
I0916 10:38:12.557663  108960 shared_informer.go:227] caches populated
I0916 10:38:12.557669  108960 shared_informer.go:227] caches populated
I0916 10:38:12.557675  108960 shared_informer.go:227] caches populated
I0916 10:38:12.557690  108960 shared_informer.go:227] caches populated
I0916 10:38:12.557701  108960 shared_informer.go:227] caches populated
I0916 10:38:12.561776  108960 httplog.go:90] POST /api/v1/nodes: (3.385001ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:12.561779  108960 node_tree.go:93] Added node "testnode" in group "" to NodeTree
I0916 10:38:12.567812  108960 httplog.go:90] PUT /api/v1/nodes/testnode/status: (5.499936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:12.571223  108960 httplog.go:90] POST /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods: (2.80494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:12.571433  108960 scheduling_queue.go:830] About to try and schedule pod node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pidpressure-fake-name
I0916 10:38:12.571453  108960 scheduler.go:530] Attempting to schedule pod: node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pidpressure-fake-name
I0916 10:38:12.571602  108960 scheduler_binder.go:257] AssumePodVolumes for pod "node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pidpressure-fake-name", node "testnode"
I0916 10:38:12.571618  108960 scheduler_binder.go:267] AssumePodVolumes for pod "node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pidpressure-fake-name", node "testnode": all PVCs bound and nothing to do
I0916 10:38:12.571675  108960 factory.go:606] Attempting to bind pidpressure-fake-name to testnode
I0916 10:38:12.576729  108960 httplog.go:90] POST /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name/binding: (4.760589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:12.577034  108960 scheduler.go:662] pod node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pidpressure-fake-name is bound successfully on node "testnode", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0916 10:38:12.579657  108960 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/events: (2.249306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:12.673965  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.963756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:12.776600  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (4.665738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:12.874274  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.822156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:12.973963  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.986565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:13.074908  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.955828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:13.174051  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.999629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:13.274066  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.070955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:13.374230  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.167638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:13.474085  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:13.474528  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:13.474868  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.075582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:13.474900  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:13.475279  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:13.475444  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:13.476557  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:13.573828  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.812639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:13.674007  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.014221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:13.773703  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.743677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:13.874019  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.002734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:13.973986  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.965372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:14.073921  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.956729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:14.173942  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.972648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:14.274410  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.165154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:14.373857  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.874681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:14.473950  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.892199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:14.474321  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:14.474731  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:14.475037  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:14.475383  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:14.475574  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:14.476723  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:14.573861  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.839733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:14.674249  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.228732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:14.774213  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.179134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:14.874107  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.100015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:14.974093  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.093698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:15.073812  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.789571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:15.173850  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.844835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:15.274032  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.955658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:15.374057  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.998641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:15.474393  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.390693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:15.474772  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:15.474890  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:15.475178  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:15.475535  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:15.475706  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:15.476809  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:15.574744  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.760635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:15.674161  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.035856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:15.776995  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (4.972049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:15.874367  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.289099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:15.974830  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.731752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:16.074046  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.001045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:16.174105  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.046646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:16.274089  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.074903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:16.374418  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.280136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:16.473721  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.738771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:16.474925  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:16.475089  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:16.475316  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:16.475703  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:16.475883  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:16.476926  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:16.574010  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.973622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:16.673956  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.952014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:16.773907  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.836074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:16.873694  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.659359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:16.973612  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.641016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:17.073919  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.891458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:17.173963  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.9279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:17.274305  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.283006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:17.374040  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.033751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:17.473561  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.631571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:17.475041  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:17.475235  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:17.475537  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:17.475909  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:17.476015  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:17.477095  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:17.574041  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.034792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:17.673946  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.943894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:17.773882  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.832016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:17.873983  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.921488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:17.973877  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.901503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:18.073893  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.868205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:18.173918  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.828166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:18.273667  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.668013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:18.373896  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.863089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:18.473958  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.950157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:18.475146  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:18.475421  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:18.475716  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:18.476061  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:18.476154  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:18.477743  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:18.574273  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.036346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:18.674198  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.166392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:18.774044  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.035655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:18.873699  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.702023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:18.973947  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.912667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:19.074212  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.242163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:19.174140  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.931441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:19.274036  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.041656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:19.376020  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (3.93814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:19.474164  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.079463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:19.475256  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:19.475562  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:19.475863  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:19.476228  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:19.476371  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:19.477898  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:19.573812  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.812758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:19.674015  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.014958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:19.774037  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.014872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:19.873969  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.923432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:19.975842  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.967419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:20.073982  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.003265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:20.173921  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.937035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:20.276417  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (4.407624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:20.377050  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (3.634993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:20.474385  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.349608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:20.475420  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:20.475720  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:20.476023  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:20.476387  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:20.476559  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:20.478055  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:20.575556  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (3.463429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:20.673901  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.874934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:20.774453  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.386381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:20.873810  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.774452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:20.973920  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.891408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:21.074285  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.267984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
E0916 10:38:21.168306  108960 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:34505/apis/events.k8s.io/v1beta1/namespaces/permit-pluginaa3c2dbb-3ef5-4b53-a3b1-eab66617ffff/events: dial tcp 127.0.0.1:34505: connect: connection refused' (may retry after sleeping)
I0916 10:38:21.173968  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.889956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:21.274413  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.355053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:21.373894  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.925134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:21.473850  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.861449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:21.475607  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:21.475866  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:21.476197  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:21.476552  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:21.476781  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:21.478147  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:21.574024  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.017554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:21.674424  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.336026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:21.773792  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.68935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:21.878812  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.074342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:21.974354  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.103501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:22.074301  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.237644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:22.173997  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.985897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:22.268942  108960 httplog.go:90] GET /api/v1/namespaces/default: (2.06333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:22.271107  108960 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.621723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:22.273174  108960 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.562113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:22.274068  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.340732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:22.374307  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.247633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:22.475507  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (3.509893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:22.476104  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:22.476112  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:22.476386  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:22.476695  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:22.476988  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:22.478291  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:22.573914  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.870642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:22.674195  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.127815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:22.774084  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.972709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:22.874119  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.055382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:22.974143  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.980802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:23.074218  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.15045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:23.178488  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.929013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:23.274269  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.060412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:23.374017  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.947706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:23.474055  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.047835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:23.476267  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:23.476302  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:23.476619  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:23.476871  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:23.477128  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:23.478458  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:23.574181  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.147167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:23.674060  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.064549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:23.776137  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (4.067291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:23.873679  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.71824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:23.973931  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.921998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:24.077704  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (5.145906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:24.173866  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.778374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:24.274099  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.040235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:24.374995  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.993414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:24.474078  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.015478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:24.476462  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:24.476462  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:24.476730  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:24.477021  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:24.477280  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:24.478606  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:24.573849  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.864337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:24.673997  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.02181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:24.773806  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.808431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:24.873747  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.725056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:24.973797  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.63607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:25.073824  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.819901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:25.173933  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.92393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:25.273821  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.771479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:25.373802  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.780123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:25.473897  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.951227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:25.476775  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:25.476790  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:25.476904  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:25.477249  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:25.477387  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:25.478863  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:25.573658  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.677841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:25.674105  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.057193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:25.774049  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.043571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:25.873977  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.048816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:25.973800  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.786972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:26.073687  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.688051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:26.173664  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.749704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:26.273766  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.738681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:26.373998  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.990186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:26.473888  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.922241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:26.476945  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:26.476974  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:26.477028  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:26.477860  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:26.477878  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:26.479029  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:26.573834  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.861683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:26.673906  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.91282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:26.773922  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.888351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:26.874007  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.974263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:26.973873  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.883862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:27.073931  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.922151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:27.173869  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.818416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:27.273958  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.893385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:27.374087  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.041713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:27.473997  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.970903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:27.478053  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:27.479198  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:27.479198  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:27.479225  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:27.479280  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:27.479283  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:27.575170  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (3.091366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:27.673698  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.668684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:27.773992  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.858386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:27.875213  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.789872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:27.974034  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.979855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:28.074093  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.030311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:28.173942  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.929934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:28.274202  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.062824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:28.374259  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.234796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:28.474611  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.60225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:28.478247  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:28.479418  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:28.479529  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:28.479438  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:28.479456  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:28.479502  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:28.612569  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.973427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:28.676115  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.331523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:28.774203  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.174614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:28.873887  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.893591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:28.973840  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.857462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:29.074053  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.044133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:29.173968  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.914312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:29.274163  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.119603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:29.373949  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.92391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:29.474198  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.071192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:29.478469  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:29.479664  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:29.479670  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:29.479750  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:29.479679  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:29.479778  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:29.573535  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.543914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:29.674077  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.098132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:29.773721  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.731147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:29.873824  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.680353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:29.974106  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.982757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:30.074095  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.121774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:30.174767  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.634764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:30.274068  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.022037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:30.374010  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.990207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:30.474647  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.214063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:30.478689  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:30.479810  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:30.479825  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:30.479903  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:30.479919  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:30.479935  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:30.575764  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.417015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:30.674099  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.027297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:30.775707  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.966854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:30.873871  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.782465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:30.973700  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.737578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:31.073819  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.823985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:31.173897  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.928455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:31.274064  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.087784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:31.373771  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.774653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:31.473832  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.839493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:31.478882  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:31.479976  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:31.480016  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:31.480063  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:31.480091  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:31.480174  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
E0916 10:38:31.539787  108960 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:34505/apis/events.k8s.io/v1beta1/namespaces/permit-pluginaa3c2dbb-3ef5-4b53-a3b1-eab66617ffff/events: dial tcp 127.0.0.1:34505: connect: connection refused' (may retry after sleeping)
I0916 10:38:31.573843  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.881752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:31.673970  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.934537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:31.774175  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.070259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:31.874025  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.056698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:31.973696  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.721845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:32.075511  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (3.519169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:32.174170  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.161706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:32.268555  108960 httplog.go:90] GET /api/v1/namespaces/default: (1.533556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:32.270360  108960 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.327822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:32.272097  108960 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.393442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:32.274076  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.661023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:32.373941  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.891698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:32.473738  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.693324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:32.479078  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:32.480088  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:32.480170  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:32.480217  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:32.480293  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:32.480356  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:32.573666  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.640097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:32.673775  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.814182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:32.774189  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.852525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:32.874117  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.859052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:32.973697  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.678897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:33.074188  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.077368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:33.174079  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.09381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:33.273735  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.699816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:33.373794  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.737439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:33.474579  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.35739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:33.479282  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:33.480220  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:33.480394  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:33.480408  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:33.480480  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:33.480494  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:33.573834  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.868942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:33.674052  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.024511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:33.773753  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.734317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:33.874954  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.180333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:33.973770  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.718166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:34.073925  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.87361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:34.173923  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.682694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:34.273840  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.749428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:34.374813  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.040897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:34.477223  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (5.252501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:34.479512  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:34.480394  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:34.480466  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:34.480602  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:34.480628  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:34.480842  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:34.573892  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.849185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:34.676012  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.093007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:34.775580  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (3.560442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:34.874055  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.029879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:34.974308  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.240545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:35.073904  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.89339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:35.174025  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.87028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:35.273847  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.810826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:35.374011  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.951001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:35.473552  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.59894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:35.479701  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:35.480469  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:35.480700  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:35.480712  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:35.480726  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:35.480985  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:35.573805  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.860147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:35.674023  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.914597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:35.773852  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.813937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:35.875755  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (3.682285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:35.973995  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.016134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:36.073855  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.849013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:36.173957  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.937776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:36.273886  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.903494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:36.373953  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.719628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:36.474514  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.429353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:36.479907  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:36.480631  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:36.480847  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:36.480873  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:36.480887  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:36.481134  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:36.573769  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.811648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:36.673867  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.906261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:36.773475  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.499978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:36.873998  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.98385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:36.979856  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (7.889506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:37.074052  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.506312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:37.174015  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.600904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:37.278683  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (5.377312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:37.376275  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (4.0121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:37.474015  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.029859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:37.480133  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:37.480821  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:37.481006  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:37.481032  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:37.481048  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:37.481298  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:37.575049  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.93964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:37.673817  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.824771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:37.773963  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.901043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:37.873613  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.591082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:37.974279  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.253251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:38.073707  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.692403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
E0916 10:38:38.123451  108960 factory.go:590] Error getting pod permit-pluginaa3c2dbb-3ef5-4b53-a3b1-eab66617ffff/test-pod for retry: Get http://127.0.0.1:34505/api/v1/namespaces/permit-pluginaa3c2dbb-3ef5-4b53-a3b1-eab66617ffff/pods/test-pod: dial tcp 127.0.0.1:34505: connect: connection refused; retrying...
I0916 10:38:38.176096  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (4.086269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:38.274209  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.175734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:38.373701  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.69131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:38.473847  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.859544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:38.480391  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:38.481139  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:38.481184  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:38.481197  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:38.481212  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:38.481527  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:38.577980  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (5.964765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:38.674169  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.160388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:38.773745  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.721642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:38.874157  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.619149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:38.974589  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.782507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:39.074041  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.015567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:39.173789  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.721387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:39.274110  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.116245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:39.377894  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (5.889381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:39.474248  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.20324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:39.480577  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:39.481359  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:39.481400  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:39.481421  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:39.481434  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:39.481703  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:39.573714  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.660352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:39.673508  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.46447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:39.773521  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.500297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:39.873751  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.67285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:39.973818  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.798592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:40.074154  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.827518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:40.174848  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.865099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:40.273779  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.518202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:40.373712  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.667172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:40.481457  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:40.481797  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:40.481871  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:40.481886  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:40.481898  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:40.481913  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:40.482514  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (7.767014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:40.573775  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.582897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:40.673584  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.599343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:40.776954  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (4.708854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:40.873731  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.662907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:40.974555  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.223426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:41.073871  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.817682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:41.173614  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.595159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:41.274171  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.782202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:41.373321  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.407257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:41.473318  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.321352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:41.481633  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:41.481959  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:41.482012  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:41.482065  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:41.482084  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:41.482111  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:41.574372  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.060004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:41.673930  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.842032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:41.773713  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.6978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:41.875979  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (3.942135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:41.974115  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (2.19087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:42.073687  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.597273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:42.173733  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.738926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:42.268843  108960 httplog.go:90] GET /api/v1/namespaces/default: (1.650851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:42.271128  108960 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.607736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:42.273307  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.453941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42798]
I0916 10:38:42.273632  108960 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.104565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:42.373727  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.790503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:42.473825  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.760038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:42.482138  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:42.482268  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:42.482268  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:42.482272  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:42.482291  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:42.482321  108960 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0916 10:38:42.573838  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.81061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:42.576060  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.560854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:42.583846  108960 httplog.go:90] DELETE /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (7.231931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:42.587021  108960 httplog.go:90] GET /api/v1/namespaces/node-pid-pressureb1b433fc-1a86-4e5a-8509-b6b5ff589fb4/pods/pidpressure-fake-name: (1.51973ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
E0916 10:38:42.587793  108960 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0916 10:38:42.588080  108960 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=30440&timeout=9m20s&timeoutSeconds=560&watch=true: (30.114202307s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42352]
I0916 10:38:42.588096  108960 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=30626&timeout=7m38s&timeoutSeconds=458&watch=true: (30.112630395s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42346]
I0916 10:38:42.588108  108960 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30440&timeout=9m56s&timeoutSeconds=596&watch=true: (30.111291264s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42358]
I0916 10:38:42.588182  108960 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30440&timeout=6m45s&timeoutSeconds=405&watch=true: (30.113754324s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42350]
I0916 10:38:42.588113  108960 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30440&timeout=9m51s&timeoutSeconds=591&watch=true: (30.113651137s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42340]
I0916 10:38:42.588227  108960 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30440&timeout=7m47s&timeoutSeconds=467&watch=true: (30.113916425s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42356]
I0916 10:38:42.588261  108960 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30440&timeout=6m27s&timeoutSeconds=387&watch=true: (30.113589852s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42342]
I0916 10:38:42.588306  108960 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=30440&timeout=6m54s&timeoutSeconds=414&watch=true: (30.113246071s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42344]
I0916 10:38:42.588312  108960 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30440&timeout=5m58s&timeoutSeconds=358&watch=true: (30.11288693s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42348]
I0916 10:38:42.588358  108960 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=30440&timeoutSeconds=428&watch=true: (30.228634452s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0916 10:38:42.588404  108960 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=30440&timeout=5m10s&timeoutSeconds=310&watch=true: (30.113027453s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0916 10:38:42.592067  108960 httplog.go:90] DELETE /api/v1/nodes: (3.75401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:42.592239  108960 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0916 10:38:42.593736  108960 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.285081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
I0916 10:38:42.595984  108960 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.857995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42368]
--- FAIL: TestNodePIDPressure (33.93s)
    predicates_test.go:924: Test Failed: error, timed out waiting for the condition, while waiting for scheduled

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190916-103020.xml
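The "timed out waiting for the condition" text above is the standard wait.ErrWaitTimeout message from k8s.io/apimachinery: the test keeps GETting pidpressure-fake-name (the repeated httplog lines above) and gives up because the pod never reports a bound node before the poll deadline. A minimal sketch of that polling pattern, assuming a pre-1.18 client-go clientset; waitForPodScheduled and its interval/timeout values are illustrative, not the exact helper used by test/integration/scheduler:

package schedulersketch

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodScheduled polls the API server until the pod reports a bound node.
// If the condition never becomes true, wait.Poll returns wait.ErrWaitTimeout,
// whose message is "timed out waiting for the condition".
func waitForPodScheduled(cs kubernetes.Interface, ns, name string) error {
	return wait.Poll(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Spec.NodeName != "", nil
	})
}

A poll like this failing says only that the scheduler never bound the pod; the log alone does not show whether the PID-pressure condition under test or an unrelated flake caused it.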



2862 Passed Tests

4 Skipped Tests

Error lines from build-log.txt

... skipping 915 lines ...
W0916 10:25:07.084] I0916 10:25:07.084505   52896 controllermanager.go:534] Started "csrapproving"
W0916 10:25:07.085] I0916 10:25:07.084625   52896 certificate_controller.go:118] Starting certificate controller "csrapproving"
W0916 10:25:07.085] I0916 10:25:07.084705   52896 shared_informer.go:197] Waiting for caches to sync for certificate-csrapproving
W0916 10:25:07.085] I0916 10:25:07.084320   52896 shared_informer.go:197] Waiting for caches to sync for endpoint
W0916 10:25:07.085] I0916 10:25:07.084998   52896 controllermanager.go:534] Started "csrcleaner"
W0916 10:25:07.085] I0916 10:25:07.085313   52896 cleaner.go:81] Starting CSR cleaner controller
W0916 10:25:07.086] E0916 10:25:07.086150   52896 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0916 10:25:07.086] W0916 10:25:07.086185   52896 controllermanager.go:526] Skipping "service"
W0916 10:25:07.087] I0916 10:25:07.086991   52896 node_lifecycle_controller.go:77] Sending events to api server
W0916 10:25:07.087] E0916 10:25:07.087033   52896 core.go:201] failed to start cloud node lifecycle controller: no cloud provider provided
W0916 10:25:07.087] W0916 10:25:07.087048   52896 controllermanager.go:526] Skipping "cloud-node-lifecycle"
W0916 10:25:07.088] I0916 10:25:07.087465   52896 controllermanager.go:534] Started "podgc"
W0916 10:25:07.088] I0916 10:25:07.087644   52896 gc_controller.go:75] Starting GC controller
W0916 10:25:07.088] I0916 10:25:07.087681   52896 shared_informer.go:197] Waiting for caches to sync for GC
W0916 10:25:07.096] I0916 10:25:07.095762   52896 controllermanager.go:534] Started "namespace"
W0916 10:25:07.096] I0916 10:25:07.095873   52896 namespace_controller.go:186] Starting namespace controller
... skipping 109 lines ...
W0916 10:25:08.007] I0916 10:25:07.798587   52896 shared_informer.go:204] Caches are synced for service account 
W0916 10:25:08.007] I0916 10:25:07.799423   52896 shared_informer.go:204] Caches are synced for job 
W0916 10:25:08.007] I0916 10:25:07.800934   52896 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
W0916 10:25:08.007] I0916 10:25:07.801361   49368 controller.go:606] quota admission added evaluator for: serviceaccounts
W0916 10:25:08.007] I0916 10:25:07.802579   52896 shared_informer.go:204] Caches are synced for deployment 
W0916 10:25:08.008] I0916 10:25:07.807582   52896 shared_informer.go:204] Caches are synced for PVC protection 
W0916 10:25:08.008] E0916 10:25:07.823245   52896 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0916 10:25:08.008] I0916 10:25:07.830971   52896 shared_informer.go:204] Caches are synced for ReplicaSet 
W0916 10:25:08.008] W0916 10:25:07.875611   52896 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0916 10:25:08.009] I0916 10:25:07.878653   52896 shared_informer.go:204] Caches are synced for TTL 
W0916 10:25:08.009] I0916 10:25:07.883035   52896 shared_informer.go:204] Caches are synced for taint 
W0916 10:25:08.009] I0916 10:25:07.883364   52896 node_lifecycle_controller.go:1253] Initializing eviction metric for zone: 
W0916 10:25:08.009] I0916 10:25:07.885092   52896 node_lifecycle_controller.go:1103] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
W0916 10:25:08.009] I0916 10:25:07.883662   52896 taint_manager.go:186] Starting NoExecuteTaintManager
W0916 10:25:08.010] I0916 10:25:07.884050   52896 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"fcf68065-3756-422d-ba49-b1c8cd9a1130", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
... skipping 69 lines ...
I0916 10:25:11.379] +++ working dir: /go/src/k8s.io/kubernetes
I0916 10:25:11.381] +++ command: run_RESTMapper_evaluation_tests
I0916 10:25:11.392] +++ [0916 10:25:11] Creating namespace namespace-1568629511-26498
I0916 10:25:11.470] namespace/namespace-1568629511-26498 created
I0916 10:25:11.542] Context "test" modified.
I0916 10:25:11.549] +++ [0916 10:25:11] Testing RESTMapper
I0916 10:25:11.652] +++ [0916 10:25:11] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0916 10:25:11.667] +++ exit code: 0
I0916 10:25:11.787] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0916 10:25:11.788] bindings                                                                      true         Binding
I0916 10:25:11.788] componentstatuses                 cs                                          false        ComponentStatus
I0916 10:25:11.788] configmaps                        cm                                          true         ConfigMap
I0916 10:25:11.789] endpoints                         ep                                          true         Endpoints
... skipping 616 lines ...
I0916 10:25:32.101] poddisruptionbudget.policy/test-pdb-3 created
I0916 10:25:32.201] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0916 10:25:32.276] poddisruptionbudget.policy/test-pdb-4 created
I0916 10:25:32.373] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0916 10:25:32.537] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:25:32.734] pod/env-test-pod created
W0916 10:25:32.835] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0916 10:25:32.835] error: setting 'all' parameter but found a non empty selector. 
W0916 10:25:32.836] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0916 10:25:32.836] I0916 10:25:31.742093   49368 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0916 10:25:32.836] error: min-available and max-unavailable cannot be both specified
I0916 10:25:32.939] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0916 10:25:32.940] Name:         env-test-pod
I0916 10:25:32.940] Namespace:    test-kubectl-describe-pod
I0916 10:25:32.940] Priority:     0
I0916 10:25:32.941] Node:         <none>
I0916 10:25:32.941] Labels:       <none>
... skipping 174 lines ...
I0916 10:25:46.775] pod/valid-pod patched
I0916 10:25:46.876] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0916 10:25:46.957] pod/valid-pod patched
I0916 10:25:47.060] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0916 10:25:47.232] pod/valid-pod patched
I0916 10:25:47.336] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0916 10:25:47.525] +++ [0916 10:25:47] "kubectl patch with resourceVersion 500" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0916 10:25:47.791] pod "valid-pod" deleted
I0916 10:25:47.804] pod/valid-pod replaced
I0916 10:25:47.911] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0916 10:25:48.081] Successful
I0916 10:25:48.082] message:error: --grace-period must have --force specified
I0916 10:25:48.082] has:\-\-grace-period must have \-\-force specified
I0916 10:25:48.253] Successful
I0916 10:25:48.254] message:error: --timeout must have --force specified
I0916 10:25:48.254] has:\-\-timeout must have \-\-force specified
I0916 10:25:48.427] node/node-v1-test created
W0916 10:25:48.528] W0916 10:25:48.426764   52896 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0916 10:25:48.628] node/node-v1-test replaced
I0916 10:25:48.732] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0916 10:25:48.828] node "node-v1-test" deleted
I0916 10:25:48.947] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0916 10:25:49.272] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0916 10:25:50.316] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 35 lines ...
I0916 10:25:51.742] pod/redis-master created
I0916 10:25:51.746] pod/valid-pod created
W0916 10:25:51.847] Edit cancelled, no changes made.
W0916 10:25:51.847] Edit cancelled, no changes made.
W0916 10:25:51.847] Edit cancelled, no changes made.
W0916 10:25:51.847] Edit cancelled, no changes made.
W0916 10:25:51.847] error: 'name' already has a value (valid-pod), and --overwrite is false
W0916 10:25:51.847] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0916 10:25:51.948] core.sh:614: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0916 10:25:51.948] core.sh:618: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0916 10:25:52.028] (Bpod "redis-master" deleted
I0916 10:25:52.033] pod "valid-pod" deleted
I0916 10:25:52.140] core.sh:622: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 72 lines ...
I0916 10:25:58.573] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0916 10:25:58.578] +++ working dir: /go/src/k8s.io/kubernetes
I0916 10:25:58.581] +++ command: run_kubectl_create_error_tests
I0916 10:25:58.594] +++ [0916 10:25:58] Creating namespace namespace-1568629558-3659
I0916 10:25:58.676] namespace/namespace-1568629558-3659 created
I0916 10:25:58.763] Context "test" modified.
I0916 10:25:58.770] +++ [0916 10:25:58] Testing kubectl create with error
W0916 10:25:58.871] Error: must specify one of -f and -k
W0916 10:25:58.871] 
W0916 10:25:58.871] Create a resource from a file or from stdin.
W0916 10:25:58.871] 
W0916 10:25:58.871]  JSON and YAML formats are accepted.
W0916 10:25:58.871] 
W0916 10:25:58.872] Examples:
... skipping 41 lines ...
W0916 10:25:58.880] 
W0916 10:25:58.880] Usage:
W0916 10:25:58.880]   kubectl create -f FILENAME [options]
W0916 10:25:58.880] 
W0916 10:25:58.880] Use "kubectl <command> --help" for more information about a given command.
W0916 10:25:58.881] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0916 10:25:59.032] +++ [0916 10:25:59] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0916 10:25:59.133] kubectl convert is DEPRECATED and will be removed in a future version.
W0916 10:25:59.134] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0916 10:25:59.244] +++ exit code: 0
I0916 10:25:59.279] Recording: run_kubectl_apply_tests
I0916 10:25:59.280] Running command: run_kubectl_apply_tests
I0916 10:25:59.304] 
... skipping 16 lines ...
I0916 10:26:00.962] apply.sh:276: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
I0916 10:26:01.050] pod "test-pod" deleted
I0916 10:26:01.274] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W0916 10:26:01.573] I0916 10:26:01.573187   49368 client.go:361] parsed scheme: "endpoint"
W0916 10:26:01.574] I0916 10:26:01.573260   49368 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0916 10:26:01.577] I0916 10:26:01.577350   49368 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0916 10:26:01.676] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0916 10:26:01.777] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0916 10:26:01.778] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0916 10:26:01.792] +++ exit code: 0
I0916 10:26:01.831] Recording: run_kubectl_run_tests
I0916 10:26:01.831] Running command: run_kubectl_run_tests
I0916 10:26:01.881] 
... skipping 97 lines ...
I0916 10:26:04.480] Context "test" modified.
I0916 10:26:04.486] +++ [0916 10:26:04] Testing kubectl create filter
I0916 10:26:04.576] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:26:04.791] pod/selector-test-pod created
I0916 10:26:04.897] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0916 10:26:04.985] Successful
I0916 10:26:04.985] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0916 10:26:04.986] has:pods "selector-test-pod-dont-apply" not found
I0916 10:26:05.067] pod "selector-test-pod" deleted
I0916 10:26:05.088] +++ exit code: 0
I0916 10:26:05.121] Recording: run_kubectl_apply_deployments_tests
I0916 10:26:05.122] Running command: run_kubectl_apply_deployments_tests
I0916 10:26:05.145] 
... skipping 25 lines ...
I0916 10:26:06.973] apps.sh:139: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:26:07.066] apps.sh:140: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:26:07.160] apps.sh:144: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:26:07.325] deployment.apps/nginx created
I0916 10:26:07.431] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0916 10:26:11.655] Successful
I0916 10:26:11.655] message:Error from server (Conflict): error when applying patch:
I0916 10:26:11.656] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568629565-3837\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0916 10:26:11.656] to:
I0916 10:26:11.656] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0916 10:26:11.656] Name: "nginx", Namespace: "namespace-1568629565-3837"
I0916 10:26:11.658] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568629565-3837\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-09-16T10:26:07Z" "generation":'\x01' "labels":map["name":"nginx"] "name":"nginx" "namespace":"namespace-1568629565-3837" "resourceVersion":"593" "selfLink":"/apis/apps/v1/namespaces/namespace-1568629565-3837/deployments/nginx" "uid":"2f3924ea-1110-4f4d-bfe3-6f2c212b6dbb"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-09-16T10:26:07Z" "lastUpdateTime":"2019-09-16T10:26:07Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-09-16T10:26:07Z" "lastUpdateTime":"2019-09-16T10:26:07Z" "message":"ReplicaSet \"nginx-8484dd655\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0916 10:26:11.658] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0916 10:26:11.658] has:Error from server (Conflict)
W0916 10:26:11.759] I0916 10:26:07.329157   52896 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568629565-3837", Name:"nginx", UID:"2f3924ea-1110-4f4d-bfe3-6f2c212b6dbb", APIVersion:"apps/v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
W0916 10:26:11.760] I0916 10:26:07.332134   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629565-3837", Name:"nginx-8484dd655", UID:"fa8edd3f-5922-45e7-b22c-e71f829d8e30", APIVersion:"apps/v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-rst9r
W0916 10:26:11.760] I0916 10:26:07.334657   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629565-3837", Name:"nginx-8484dd655", UID:"fa8edd3f-5922-45e7-b22c-e71f829d8e30", APIVersion:"apps/v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-rq95l
W0916 10:26:11.761] I0916 10:26:07.335164   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629565-3837", Name:"nginx-8484dd655", UID:"fa8edd3f-5922-45e7-b22c-e71f829d8e30", APIVersion:"apps/v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-wdmxp
W0916 10:26:12.922] I0916 10:26:12.921603   52896 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1568629555-9695
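The Conflict above is ordinary optimistic concurrency: the apply patch pins resourceVersion "99" from the last-applied configuration while the live Deployment is already at "593", so the API server refuses the update. A rough client-go illustration of the same failure mode (pre-1.18 Patch signature; the package, function name, and patch body are made up for this sketch):

package applysketch

import (
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchWithStaleResourceVersion sends a strategic-merge patch that embeds an
// old resourceVersion; if the object has moved on, the API server answers
// 409 Conflict ("the object has been modified; please apply your changes to
// the latest version and try again"), which is what kubectl apply surfaced above.
func patchWithStaleResourceVersion(cs kubernetes.Interface, ns string) error {
	patch := []byte(`{"metadata":{"resourceVersion":"99"},"spec":{"replicas":3}}`)
	_, err := cs.AppsV1().Deployments(ns).Patch("nginx", types.StrategicMergePatchType, patch)
	return err
}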
I0916 10:26:16.901] deployment.apps/nginx configured
... skipping 146 lines ...
I0916 10:26:24.429] +++ [0916 10:26:24] Creating namespace namespace-1568629584-2992
I0916 10:26:24.512] namespace/namespace-1568629584-2992 created
I0916 10:26:24.590] Context "test" modified.
I0916 10:26:24.597] +++ [0916 10:26:24] Testing kubectl get
I0916 10:26:24.706] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:26:24.810] Successful
I0916 10:26:24.811] message:Error from server (NotFound): pods "abc" not found
I0916 10:26:24.811] has:pods "abc" not found
I0916 10:26:24.906] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:26:25.003] Successful
I0916 10:26:25.003] message:Error from server (NotFound): pods "abc" not found
I0916 10:26:25.003] has:pods "abc" not found
I0916 10:26:25.103] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:26:25.195] Successful
I0916 10:26:25.196] message:{
I0916 10:26:25.197]     "apiVersion": "v1",
I0916 10:26:25.197]     "items": [],
... skipping 23 lines ...
I0916 10:26:25.567] has not:No resources found
I0916 10:26:25.674] Successful
I0916 10:26:25.674] message:NAME
I0916 10:26:25.674] has not:No resources found
I0916 10:26:25.773] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:26:25.886] Successful
I0916 10:26:25.887] message:error: the server doesn't have a resource type "foobar"
I0916 10:26:25.887] has not:No resources found
I0916 10:26:25.978] Successful
I0916 10:26:25.978] message:No resources found in namespace-1568629584-2992 namespace.
I0916 10:26:25.978] has:No resources found
I0916 10:26:26.073] Successful
I0916 10:26:26.074] message:
I0916 10:26:26.074] has not:No resources found
I0916 10:26:26.172] Successful
I0916 10:26:26.172] message:No resources found in namespace-1568629584-2992 namespace.
I0916 10:26:26.172] has:No resources found
I0916 10:26:26.270] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:26:26.366] Successful
I0916 10:26:26.367] message:Error from server (NotFound): pods "abc" not found
I0916 10:26:26.367] has:pods "abc" not found
I0916 10:26:26.368] FAIL!
I0916 10:26:26.369] message:Error from server (NotFound): pods "abc" not found
I0916 10:26:26.369] has not:List
I0916 10:26:26.369] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0916 10:26:26.494] Successful
I0916 10:26:26.495] message:I0916 10:26:26.441627   62902 loader.go:375] Config loaded from file:  /tmp/tmp.cZj11XPBMR/.kube/config
I0916 10:26:26.495] I0916 10:26:26.443314   62902 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0916 10:26:26.496] I0916 10:26:26.465603   62902 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
I0916 10:26:32.188] Successful
I0916 10:26:32.189] message:NAME    DATA   AGE
I0916 10:26:32.189] one     0      1s
I0916 10:26:32.189] three   0      0s
I0916 10:26:32.189] two     0      1s
I0916 10:26:32.189] STATUS    REASON          MESSAGE
I0916 10:26:32.190] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 10:26:32.190] has not:watch is only supported on individual resources
I0916 10:26:33.286] Successful
I0916 10:26:33.286] message:STATUS    REASON          MESSAGE
I0916 10:26:33.286] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 10:26:33.286] has not:watch is only supported on individual resources
I0916 10:26:33.293] +++ [0916 10:26:33] Creating namespace namespace-1568629593-2169
I0916 10:26:33.375] namespace/namespace-1568629593-2169 created
I0916 10:26:33.453] Context "test" modified.
I0916 10:26:33.553] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:26:33.725] pod/valid-pod created
... skipping 56 lines ...
I0916 10:26:33.823] }
I0916 10:26:33.915] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0916 10:26:34.175] <no value>Successful
I0916 10:26:34.175] message:valid-pod:
I0916 10:26:34.175] has:valid-pod:
I0916 10:26:34.266] Successful
I0916 10:26:34.266] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0916 10:26:34.266] 	template was:
I0916 10:26:34.266] 		{.missing}
I0916 10:26:34.266] 	object given to jsonpath engine was:
I0916 10:26:34.267] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-09-16T10:26:33Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1568629593-2169", "resourceVersion":"697", "selfLink":"/api/v1/namespaces/namespace-1568629593-2169/pods/valid-pod", "uid":"e6b4579e-37ca-4efb-b7a1-ef58e799d2d6"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0916 10:26:34.267] has:missing is not found
I0916 10:26:34.362] Successful
I0916 10:26:34.362] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0916 10:26:34.363] 	template was:
I0916 10:26:34.363] 		{{.missing}}
I0916 10:26:34.363] 	raw data was:
I0916 10:26:34.364] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-09-16T10:26:33Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1568629593-2169","resourceVersion":"697","selfLink":"/api/v1/namespaces/namespace-1568629593-2169/pods/valid-pod","uid":"e6b4579e-37ca-4efb-b7a1-ef58e799d2d6"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0916 10:26:34.364] 	object given to template engine was:
I0916 10:26:34.365] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-09-16T10:26:33Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1568629593-2169 resourceVersion:697 selfLink:/api/v1/namespaces/namespace-1568629593-2169/pods/valid-pod uid:e6b4579e-37ca-4efb-b7a1-ef58e799d2d6] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0916 10:26:34.365] has:map has no entry for key "missing"
W0916 10:26:34.465] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
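Both template errors above are plain Go text/template behavior surfaced through kubectl: jsonpath reports the missing field directly, while the go-template path only errors on an absent map key when missing keys are disallowed (kubectl's --allow-missing-template-keys=false, i.e. missingkey=error; that option value is an assumption about how the printer was configured in this run). A small standalone reproduction:

package main

import (
	"fmt"
	"os"
	"text/template"
)

func main() {
	// missingkey=error makes {{.missing}} fail on a map that lacks the key,
	// yielding the same "map has no entry for key" message seen in the log.
	t := template.Must(template.New("output").
		Option("missingkey=error").
		Parse("{{.missing}}"))
	data := map[string]interface{}{"kind": "Pod"}
	if err := t.Execute(os.Stdout, data); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}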
I0916 10:26:35.453] Successful
I0916 10:26:35.454] message:NAME        READY   STATUS    RESTARTS   AGE
I0916 10:26:35.454] valid-pod   0/1     Pending   0          1s
I0916 10:26:35.454] STATUS      REASON          MESSAGE
I0916 10:26:35.454] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 10:26:35.454] has:STATUS
I0916 10:26:35.456] Successful
I0916 10:26:35.456] message:NAME        READY   STATUS    RESTARTS   AGE
I0916 10:26:35.457] valid-pod   0/1     Pending   0          1s
I0916 10:26:35.457] STATUS      REASON          MESSAGE
I0916 10:26:35.457] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 10:26:35.457] has:valid-pod
I0916 10:26:36.545] Successful
I0916 10:26:36.545] message:pod/valid-pod
I0916 10:26:36.545] has not:STATUS
I0916 10:26:36.547] Successful
I0916 10:26:36.548] message:pod/valid-pod
... skipping 72 lines ...
I0916 10:26:37.646] status:
I0916 10:26:37.646]   phase: Pending
I0916 10:26:37.646]   qosClass: Guaranteed
I0916 10:26:37.646] ---
I0916 10:26:37.646] has:name: valid-pod
I0916 10:26:37.732] Successful
I0916 10:26:37.732] message:Error from server (NotFound): pods "invalid-pod" not found
I0916 10:26:37.732] has:"invalid-pod" not found
I0916 10:26:37.820] pod "valid-pod" deleted
I0916 10:26:37.920] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:26:38.089] pod/redis-master created
I0916 10:26:38.093] pod/valid-pod created
I0916 10:26:38.193] Successful
... skipping 35 lines ...
I0916 10:26:39.486] +++ command: run_kubectl_exec_pod_tests
I0916 10:26:39.497] +++ [0916 10:26:39] Creating namespace namespace-1568629599-9816
I0916 10:26:39.578] namespace/namespace-1568629599-9816 created
I0916 10:26:39.658] Context "test" modified.
I0916 10:26:39.666] +++ [0916 10:26:39] Testing kubectl exec POD COMMAND
I0916 10:26:39.757] Successful
I0916 10:26:39.758] message:Error from server (NotFound): pods "abc" not found
I0916 10:26:39.758] has:pods "abc" not found
I0916 10:26:39.925] pod/test-pod created
I0916 10:26:40.039] Successful
I0916 10:26:40.040] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0916 10:26:40.040] has not:pods "test-pod" not found
I0916 10:26:40.041] Successful
I0916 10:26:40.041] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0916 10:26:40.041] has not:pod or type/name must be specified
I0916 10:26:40.130] pod "test-pod" deleted
I0916 10:26:40.153] +++ exit code: 0
I0916 10:26:40.196] Recording: run_kubectl_exec_resource_name_tests
I0916 10:26:40.196] Running command: run_kubectl_exec_resource_name_tests
I0916 10:26:40.222] 
... skipping 2 lines ...
I0916 10:26:40.231] +++ command: run_kubectl_exec_resource_name_tests
I0916 10:26:40.245] +++ [0916 10:26:40] Creating namespace namespace-1568629600-30999
I0916 10:26:40.330] namespace/namespace-1568629600-30999 created
I0916 10:26:40.405] Context "test" modified.
I0916 10:26:40.413] +++ [0916 10:26:40] Testing kubectl exec TYPE/NAME COMMAND
I0916 10:26:40.521] Successful
I0916 10:26:40.521] message:error: the server doesn't have a resource type "foo"
I0916 10:26:40.521] has:error:
I0916 10:26:40.614] Successful
I0916 10:26:40.614] message:Error from server (NotFound): deployments.apps "bar" not found
I0916 10:26:40.614] has:"bar" not found
I0916 10:26:40.781] pod/test-pod created
I0916 10:26:40.966] replicaset.apps/frontend created
W0916 10:26:41.066] I0916 10:26:40.969737   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629600-30999", Name:"frontend", UID:"5029a583-510e-4e15-9e43-fe7e6578ad39", APIVersion:"apps/v1", ResourceVersion:"750", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vpjld
W0916 10:26:41.067] I0916 10:26:40.973144   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629600-30999", Name:"frontend", UID:"5029a583-510e-4e15-9e43-fe7e6578ad39", APIVersion:"apps/v1", ResourceVersion:"750", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-flnvn
W0916 10:26:41.067] I0916 10:26:40.973472   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629600-30999", Name:"frontend", UID:"5029a583-510e-4e15-9e43-fe7e6578ad39", APIVersion:"apps/v1", ResourceVersion:"750", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-s7mwj
I0916 10:26:41.168] configmap/test-set-env-config created
I0916 10:26:41.247] Successful
I0916 10:26:41.247] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0916 10:26:41.247] has:not implemented
I0916 10:26:41.349] Successful
I0916 10:26:41.349] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0916 10:26:41.349] has not:not found
I0916 10:26:41.351] Successful
I0916 10:26:41.352] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0916 10:26:41.352] has not:pod or type/name must be specified
I0916 10:26:41.462] Successful
I0916 10:26:41.463] message:Error from server (BadRequest): pod frontend-flnvn does not have a host assigned
I0916 10:26:41.463] has not:not found
I0916 10:26:41.466] Successful
I0916 10:26:41.466] message:Error from server (BadRequest): pod frontend-flnvn does not have a host assigned
I0916 10:26:41.466] has not:pod or type/name must be specified
I0916 10:26:41.549] pod "test-pod" deleted
I0916 10:26:41.639] replicaset.apps "frontend" deleted
I0916 10:26:41.733] configmap "test-set-env-config" deleted
I0916 10:26:41.757] +++ exit code: 0
I0916 10:26:41.799] Recording: run_create_secret_tests
I0916 10:26:41.800] Running command: run_create_secret_tests
I0916 10:26:41.827] 
I0916 10:26:41.830] +++ Running case: test-cmd.run_create_secret_tests 
I0916 10:26:41.833] +++ working dir: /go/src/k8s.io/kubernetes
I0916 10:26:41.836] +++ command: run_create_secret_tests
I0916 10:26:41.936] Successful
I0916 10:26:41.937] message:Error from server (NotFound): secrets "mysecret" not found
I0916 10:26:41.937] has:secrets "mysecret" not found
I0916 10:26:42.111] Successful
I0916 10:26:42.111] message:Error from server (NotFound): secrets "mysecret" not found
I0916 10:26:42.111] has:secrets "mysecret" not found
I0916 10:26:42.113] Successful
I0916 10:26:42.113] message:user-specified
I0916 10:26:42.114] has:user-specified
I0916 10:26:42.189] Successful
I0916 10:26:42.271] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"3a9b3df2-89c6-4be7-ab56-6f7ff9ab9e64","resourceVersion":"771","creationTimestamp":"2019-09-16T10:26:42Z"}}
... skipping 2 lines ...
I0916 10:26:42.457] has:uid
I0916 10:26:42.539] Successful
I0916 10:26:42.540] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"3a9b3df2-89c6-4be7-ab56-6f7ff9ab9e64","resourceVersion":"772","creationTimestamp":"2019-09-16T10:26:42Z"},"data":{"key1":"config1"}}
I0916 10:26:42.541] has:config1
I0916 10:26:42.616] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"3a9b3df2-89c6-4be7-ab56-6f7ff9ab9e64"}}
I0916 10:26:42.714] Successful
I0916 10:26:42.715] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0916 10:26:42.715] has:configmaps "tester-update-cm" not found
I0916 10:26:42.730] +++ exit code: 0
I0916 10:26:42.771] Recording: run_kubectl_create_kustomization_directory_tests
I0916 10:26:42.771] Running command: run_kubectl_create_kustomization_directory_tests
I0916 10:26:42.797] 
I0916 10:26:42.800] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
I0916 10:26:45.658] valid-pod   0/1     Pending   0          0s
I0916 10:26:45.658] has:valid-pod
I0916 10:26:46.747] Successful
I0916 10:26:46.748] message:NAME        READY   STATUS    RESTARTS   AGE
I0916 10:26:46.748] valid-pod   0/1     Pending   0          0s
I0916 10:26:46.748] STATUS      REASON          MESSAGE
I0916 10:26:46.749] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0916 10:26:46.749] has:Timeout exceeded while reading body
I0916 10:26:46.842] Successful
I0916 10:26:46.842] message:NAME        READY   STATUS    RESTARTS   AGE
I0916 10:26:46.842] valid-pod   0/1     Pending   0          1s
I0916 10:26:46.842] has:valid-pod
I0916 10:26:46.923] Successful
I0916 10:26:46.924] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0916 10:26:46.924] has:Invalid timeout value
I0916 10:26:47.013] pod "valid-pod" deleted
I0916 10:26:47.048] +++ exit code: 0
I0916 10:26:47.090] Recording: run_crd_tests
I0916 10:26:47.091] Running command: run_crd_tests
I0916 10:26:47.120] 
... skipping 158 lines ...
I0916 10:26:52.221] foo.company.com/test patched
I0916 10:26:52.323] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0916 10:26:52.409] foo.company.com/test patched
I0916 10:26:52.511] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0916 10:26:52.599] foo.company.com/test patched
I0916 10:26:52.700] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0916 10:26:52.869] +++ [0916 10:26:52] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0916 10:26:52.939] {
I0916 10:26:52.939]     "apiVersion": "company.com/v1",
I0916 10:26:52.940]     "kind": "Foo",
I0916 10:26:52.940]     "metadata": {
I0916 10:26:52.940]         "annotations": {
I0916 10:26:52.940]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 191 lines ...
I0916 10:27:16.930] namespace/non-native-resources created
I0916 10:27:17.106] bar.company.com/test created
I0916 10:27:17.215] crd.sh:455: Successful get bars {{len .items}}: 1
I0916 10:27:17.300] namespace "non-native-resources" deleted
I0916 10:27:22.527] crd.sh:458: Successful get bars {{len .items}}: 0
I0916 10:27:22.706] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0916 10:27:22.807] Error from server (NotFound): namespaces "non-native-resources" not found
I0916 10:27:22.908] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0916 10:27:22.917] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0916 10:27:23.019] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0916 10:27:23.049] +++ exit code: 0
I0916 10:27:23.091] Recording: run_cmd_with_img_tests
I0916 10:27:23.091] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0916 10:27:23.409] I0916 10:27:23.403417   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629643-30817", Name:"test1-6cdffdb5b8", UID:"062b9b99-9bf7-45cb-ba49-b23f25ee71ec", APIVersion:"apps/v1", ResourceVersion:"930", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-z4rql
I0916 10:27:23.510] Successful
I0916 10:27:23.510] message:deployment.apps/test1 created
I0916 10:27:23.510] has:deployment.apps/test1 created
I0916 10:27:23.510] deployment.apps "test1" deleted
I0916 10:27:23.590] Successful
I0916 10:27:23.591] message:error: Invalid image name "InvalidImageName": invalid reference format
I0916 10:27:23.592] has:error: Invalid image name "InvalidImageName": invalid reference format
I0916 10:27:23.606] +++ exit code: 0
I0916 10:27:23.658] +++ [0916 10:27:23] Testing recursive resources
I0916 10:27:23.666] +++ [0916 10:27:23] Creating namespace namespace-1568629643-12121
I0916 10:27:23.765] namespace/namespace-1568629643-12121 created
I0916 10:27:23.849] Context "test" modified.
I0916 10:27:23.950] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:27:24.258] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:27:24.262] Successful
I0916 10:27:24.262] message:pod/busybox0 created
I0916 10:27:24.262] pod/busybox1 created
I0916 10:27:24.262] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0916 10:27:24.263] has:error validating data: kind not set
I0916 10:27:24.364] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:27:24.552] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0916 10:27:24.556] Successful
I0916 10:27:24.556] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:27:24.556] has:Object 'Kind' is missing
I0916 10:27:24.652] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:27:24.975] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0916 10:27:24.978] Successful
I0916 10:27:24.979] message:pod/busybox0 replaced
I0916 10:27:24.979] pod/busybox1 replaced
I0916 10:27:24.979] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0916 10:27:24.980] has:error validating data: kind not set
I0916 10:27:25.076] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:27:25.181] (BSuccessful
I0916 10:27:25.181] message:Name:         busybox0
I0916 10:27:25.181] Namespace:    namespace-1568629643-12121
I0916 10:27:25.182] Priority:     0
I0916 10:27:25.182] Node:         <none>
... skipping 159 lines ...
I0916 10:27:25.195] has:Object 'Kind' is missing
I0916 10:27:25.290] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:27:25.484] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0916 10:27:25.487] Successful
I0916 10:27:25.487] message:pod/busybox0 annotated
I0916 10:27:25.487] pod/busybox1 annotated
I0916 10:27:25.488] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:27:25.488] has:Object 'Kind' is missing
W0916 10:27:25.589] W0916 10:27:23.713440   49368 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0916 10:27:25.589] E0916 10:27:23.715400   52896 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:25.589] W0916 10:27:23.823260   49368 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0916 10:27:25.590] E0916 10:27:23.824851   52896 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:25.590] W0916 10:27:23.925507   49368 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0916 10:27:25.591] E0916 10:27:23.927310   52896 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:25.591] W0916 10:27:24.027597   49368 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0916 10:27:25.591] E0916 10:27:24.029045   52896 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:25.591] E0916 10:27:24.716800   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:25.592] E0916 10:27:24.826288   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:25.592] E0916 10:27:24.928878   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:25.592] E0916 10:27:25.030948   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:27:25.693] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:27:25.897] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0916 10:27:25.900] Successful
I0916 10:27:25.901] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0916 10:27:25.901] pod/busybox0 configured
I0916 10:27:25.901] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0916 10:27:25.902] pod/busybox1 configured
I0916 10:27:25.902] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0916 10:27:25.902] has:error validating data: kind not set
I0916 10:27:25.998] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:27:26.178] deployment.apps/nginx created
W0916 10:27:26.279] E0916 10:27:25.718142   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:26.280] E0916 10:27:25.827863   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:26.280] E0916 10:27:25.930554   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:26.280] E0916 10:27:26.032413   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:26.280] I0916 10:27:26.182816   52896 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568629643-12121", Name:"nginx", UID:"88df68cb-d570-4b75-b8bc-8ba14d3e2fee", APIVersion:"apps/v1", ResourceVersion:"955", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W0916 10:27:26.281] I0916 10:27:26.185313   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629643-12121", Name:"nginx-f87d999f7", UID:"d0d60eb2-eb39-4c49-93e3-17d72dba4247", APIVersion:"apps/v1", ResourceVersion:"956", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-vm6vp
W0916 10:27:26.281] I0916 10:27:26.188226   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629643-12121", Name:"nginx-f87d999f7", UID:"d0d60eb2-eb39-4c49-93e3-17d72dba4247", APIVersion:"apps/v1", ResourceVersion:"956", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-xpcrj
W0916 10:27:26.281] I0916 10:27:26.189271   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629643-12121", Name:"nginx-f87d999f7", UID:"d0d60eb2-eb39-4c49-93e3-17d72dba4247", APIVersion:"apps/v1", ResourceVersion:"956", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-2sxmc
I0916 10:27:26.382] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0916 10:27:26.388] generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 41 lines ...
I0916 10:27:26.578]       terminationGracePeriodSeconds: 30
I0916 10:27:26.578] status: {}
I0916 10:27:26.578] has:extensions/v1beta1
I0916 10:27:26.662] deployment.apps "nginx" deleted
W0916 10:27:26.763] kubectl convert is DEPRECATED and will be removed in a future version.
W0916 10:27:26.764] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0916 10:27:26.765] E0916 10:27:26.719525   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:26.830] E0916 10:27:26.829637   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:27:26.931] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:27:26.952] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:27:26.956] Successful
I0916 10:27:26.956] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0916 10:27:26.957] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0916 10:27:26.957] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:27:26.957] has:Object 'Kind' is missing
I0916 10:27:27.057] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:27:27.150] Successful
I0916 10:27:27.151] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:27:27.151] has:busybox0:busybox1:
I0916 10:27:27.153] Successful
I0916 10:27:27.154] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:27:27.154] has:Object 'Kind' is missing
I0916 10:27:27.256] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:27:27.353] pod/busybox0 labeled
I0916 10:27:27.353] pod/busybox1 labeled
I0916 10:27:27.354] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:27:27.448] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0916 10:27:27.450] Successful
I0916 10:27:27.451] message:pod/busybox0 labeled
I0916 10:27:27.451] pod/busybox1 labeled
I0916 10:27:27.452] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:27:27.452] has:Object 'Kind' is missing
I0916 10:27:27.547] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:27:27.640] pod/busybox0 patched
I0916 10:27:27.640] pod/busybox1 patched
I0916 10:27:27.641] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:27:27.736] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0916 10:27:27.739] Successful
I0916 10:27:27.739] message:pod/busybox0 patched
I0916 10:27:27.740] pod/busybox1 patched
I0916 10:27:27.740] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:27:27.740] has:Object 'Kind' is missing
I0916 10:27:27.837] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:27:28.029] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:27:28.031] Successful
I0916 10:27:28.032] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0916 10:27:28.032] pod "busybox0" force deleted
I0916 10:27:28.032] pod "busybox1" force deleted
I0916 10:27:28.033] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0916 10:27:28.033] has:Object 'Kind' is missing
I0916 10:27:28.124] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:27:28.288] replicationcontroller/busybox0 created
I0916 10:27:28.293] replicationcontroller/busybox1 created
W0916 10:27:28.394] E0916 10:27:26.932426   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:28.395] E0916 10:27:27.033791   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:28.395] I0916 10:27:27.412792   52896 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0916 10:27:28.396] E0916 10:27:27.720979   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:28.397] E0916 10:27:27.831524   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:28.397] E0916 10:27:27.933736   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:28.398] E0916 10:27:28.035428   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:28.398] I0916 10:27:28.292460   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629643-12121", Name:"busybox0", UID:"c2dc097e-9e0e-4531-8c1a-762d4b0978e3", APIVersion:"v1", ResourceVersion:"986", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-q5d64
W0916 10:27:28.399] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0916 10:27:28.400] I0916 10:27:28.296160   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629643-12121", Name:"busybox1", UID:"8200ea46-7fc3-4089-8676-0bb91182a762", APIVersion:"v1", ResourceVersion:"988", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-scnpc
I0916 10:27:28.501] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:27:28.513] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:27:28.613] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0916 10:27:28.715] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0916 10:27:28.931] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0916 10:27:29.035] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0916 10:27:29.037] Successful
I0916 10:27:29.037] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0916 10:27:29.038] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0916 10:27:29.038] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:27:29.038] has:Object 'Kind' is missing
I0916 10:27:29.136] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0916 10:27:29.229] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0916 10:27:29.335] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:27:29.430] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0916 10:27:29.526] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0916 10:27:29.728] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0916 10:27:29.824] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0916 10:27:29.826] Successful
I0916 10:27:29.826] message:service/busybox0 exposed
I0916 10:27:29.827] service/busybox1 exposed
I0916 10:27:29.827] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:27:29.827] has:Object 'Kind' is missing
I0916 10:27:29.923] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:27:30.019] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0916 10:27:30.115] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0916 10:27:30.331] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0916 10:27:30.429] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0916 10:27:30.432] Successful
I0916 10:27:30.432] message:replicationcontroller/busybox0 scaled
I0916 10:27:30.432] replicationcontroller/busybox1 scaled
I0916 10:27:30.433] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:27:30.433] has:Object 'Kind' is missing
I0916 10:27:30.528] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:27:30.720] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:27:30.723] Successful
I0916 10:27:30.723] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0916 10:27:30.724] replicationcontroller "busybox0" force deleted
I0916 10:27:30.724] replicationcontroller "busybox1" force deleted
I0916 10:27:30.725] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:27:30.725] has:Object 'Kind' is missing
I0916 10:27:30.821] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:27:30.991] deployment.apps/nginx1-deployment created
I0916 10:27:30.994] deployment.apps/nginx0-deployment created
W0916 10:27:31.095] E0916 10:27:28.722298   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:31.096] E0916 10:27:28.832971   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:31.096] E0916 10:27:28.937348   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:31.097] E0916 10:27:29.036933   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:31.097] E0916 10:27:29.723626   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:31.097] E0916 10:27:29.834611   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:31.098] E0916 10:27:29.938898   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:31.098] E0916 10:27:30.038500   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:31.099] I0916 10:27:30.213458   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629643-12121", Name:"busybox0", UID:"c2dc097e-9e0e-4531-8c1a-762d4b0978e3", APIVersion:"v1", ResourceVersion:"1007", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-c7bf5
W0916 10:27:31.099] I0916 10:27:30.225231   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629643-12121", Name:"busybox1", UID:"8200ea46-7fc3-4089-8676-0bb91182a762", APIVersion:"v1", ResourceVersion:"1012", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-qnq8x
W0916 10:27:31.099] E0916 10:27:30.726278   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:31.100] E0916 10:27:30.836066   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:31.100] E0916 10:27:30.940459   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:31.100] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0916 10:27:31.101] I0916 10:27:30.994924   52896 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568629643-12121", Name:"nginx1-deployment", UID:"09f967d8-8a61-4d8a-837d-c45ac2f6da65", APIVersion:"apps/v1", ResourceVersion:"1027", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
W0916 10:27:31.101] I0916 10:27:30.998869   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629643-12121", Name:"nginx1-deployment-7bdbbfb5cf", UID:"57f5f44d-92d5-4a58-9d3b-d1c6928ebad9", APIVersion:"apps/v1", ResourceVersion:"1028", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-cn5kp
W0916 10:27:31.102] I0916 10:27:30.999384   52896 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568629643-12121", Name:"nginx0-deployment", UID:"27ec6c51-5f3a-4e46-88b3-dcc09fd04bba", APIVersion:"apps/v1", ResourceVersion:"1029", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
W0916 10:27:31.102] I0916 10:27:31.003628   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629643-12121", Name:"nginx0-deployment-57c6bff7f6", UID:"85fec791-c17d-4912-bd2b-247ff426f61b", APIVersion:"apps/v1", ResourceVersion:"1032", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-24ctd
W0916 10:27:31.103] I0916 10:27:31.003693   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629643-12121", Name:"nginx1-deployment-7bdbbfb5cf", UID:"57f5f44d-92d5-4a58-9d3b-d1c6928ebad9", APIVersion:"apps/v1", ResourceVersion:"1028", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-d4d7m
W0916 10:27:31.103] I0916 10:27:31.006895   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629643-12121", Name:"nginx0-deployment-57c6bff7f6", UID:"85fec791-c17d-4912-bd2b-247ff426f61b", APIVersion:"apps/v1", ResourceVersion:"1032", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-q7jqq
W0916 10:27:31.103] E0916 10:27:31.040055   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:27:31.204] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0916 10:27:31.223] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0916 10:27:31.432] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0916 10:27:31.435] Successful
I0916 10:27:31.435] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0916 10:27:31.435] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0916 10:27:31.436] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 10:27:31.436] has:Object 'Kind' is missing
I0916 10:27:31.535] deployment.apps/nginx1-deployment paused
I0916 10:27:31.539] deployment.apps/nginx0-deployment paused
I0916 10:27:31.652] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0916 10:27:31.655] Successful
I0916 10:27:31.655] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 10:27:31.655] has:Object 'Kind' is missing
I0916 10:27:31.749] deployment.apps/nginx1-deployment resumed
I0916 10:27:31.752] deployment.apps/nginx0-deployment resumed
W0916 10:27:31.853] E0916 10:27:31.727676   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:31.854] E0916 10:27:31.837482   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:31.942] E0916 10:27:31.942033   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:32.042] E0916 10:27:32.041375   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:32.059] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0916 10:27:32.075] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 10:27:32.176] generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I0916 10:27:32.176] Successful
I0916 10:27:32.177] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 10:27:32.177] has:Object 'Kind' is missing
I0916 10:27:32.177] Successful
I0916 10:27:32.177] message:deployment.apps/nginx1-deployment 
I0916 10:27:32.177] REVISION  CHANGE-CAUSE
I0916 10:27:32.177] 1         <none>
I0916 10:27:32.178] 
I0916 10:27:32.178] deployment.apps/nginx0-deployment 
I0916 10:27:32.178] REVISION  CHANGE-CAUSE
I0916 10:27:32.178] 1         <none>
I0916 10:27:32.178] 
I0916 10:27:32.178] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 10:27:32.178] has:nginx0-deployment
I0916 10:27:32.178] Successful
I0916 10:27:32.178] message:deployment.apps/nginx1-deployment 
I0916 10:27:32.179] REVISION  CHANGE-CAUSE
I0916 10:27:32.179] 1         <none>
I0916 10:27:32.179] 
I0916 10:27:32.179] deployment.apps/nginx0-deployment 
I0916 10:27:32.179] REVISION  CHANGE-CAUSE
I0916 10:27:32.179] 1         <none>
I0916 10:27:32.179] 
I0916 10:27:32.179] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 10:27:32.180] has:nginx1-deployment
I0916 10:27:32.180] Successful
I0916 10:27:32.180] message:deployment.apps/nginx1-deployment 
I0916 10:27:32.180] REVISION  CHANGE-CAUSE
I0916 10:27:32.180] 1         <none>
I0916 10:27:32.180] 
I0916 10:27:32.180] deployment.apps/nginx0-deployment 
I0916 10:27:32.180] REVISION  CHANGE-CAUSE
I0916 10:27:32.180] 1         <none>
I0916 10:27:32.180] 
I0916 10:27:32.181] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0916 10:27:32.181] has:Object 'Kind' is missing
I0916 10:27:32.181] deployment.apps "nginx1-deployment" force deleted
I0916 10:27:32.181] deployment.apps "nginx0-deployment" force deleted
W0916 10:27:32.729] E0916 10:27:32.729088   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:32.839] E0916 10:27:32.838978   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:32.944] E0916 10:27:32.943688   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:33.043] E0916 10:27:33.042947   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:27:33.182] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:27:33.347] replicationcontroller/busybox0 created
I0916 10:27:33.351] replicationcontroller/busybox1 created
W0916 10:27:33.452] I0916 10:27:33.351157   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629643-12121", Name:"busybox0", UID:"5f2501b1-bee1-480a-953c-3f868724d17d", APIVersion:"v1", ResourceVersion:"1076", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-fvhf2
W0916 10:27:33.453] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0916 10:27:33.454] I0916 10:27:33.355713   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629643-12121", Name:"busybox1", UID:"787cfab8-badb-4d93-a7f0-b3c5bd46cca2", APIVersion:"v1", ResourceVersion:"1078", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-s47c7
I0916 10:27:33.554] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0916 10:27:33.562] Successful
I0916 10:27:33.563] message:no rollbacker has been implemented for "ReplicationController"
I0916 10:27:33.564] no rollbacker has been implemented for "ReplicationController"
I0916 10:27:33.564] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
I0916 10:27:33.566] message:no rollbacker has been implemented for "ReplicationController"
I0916 10:27:33.566] no rollbacker has been implemented for "ReplicationController"
I0916 10:27:33.567] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:27:33.567] has:Object 'Kind' is missing
I0916 10:27:33.670] Successful
I0916 10:27:33.671] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:27:33.671] error: replicationcontrollers "busybox0" pausing is not supported
I0916 10:27:33.671] error: replicationcontrollers "busybox1" pausing is not supported
I0916 10:27:33.671] has:Object 'Kind' is missing
I0916 10:27:33.673] Successful
I0916 10:27:33.674] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:27:33.674] error: replicationcontrollers "busybox0" pausing is not supported
I0916 10:27:33.674] error: replicationcontrollers "busybox1" pausing is not supported
I0916 10:27:33.674] has:replicationcontrollers "busybox0" pausing is not supported
I0916 10:27:33.676] Successful
I0916 10:27:33.677] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:27:33.677] error: replicationcontrollers "busybox0" pausing is not supported
I0916 10:27:33.677] error: replicationcontrollers "busybox1" pausing is not supported
I0916 10:27:33.677] has:replicationcontrollers "busybox1" pausing is not supported
I0916 10:27:33.776] Successful
I0916 10:27:33.777] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:27:33.777] error: replicationcontrollers "busybox0" resuming is not supported
I0916 10:27:33.777] error: replicationcontrollers "busybox1" resuming is not supported
I0916 10:27:33.777] has:Object 'Kind' is missing
I0916 10:27:33.779] Successful
I0916 10:27:33.779] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:27:33.779] error: replicationcontrollers "busybox0" resuming is not supported
I0916 10:27:33.780] error: replicationcontrollers "busybox1" resuming is not supported
I0916 10:27:33.780] has:replicationcontrollers "busybox0" resuming is not supported
I0916 10:27:33.782] Successful
I0916 10:27:33.783] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0916 10:27:33.783] error: replicationcontrollers "busybox0" resuming is not supported
I0916 10:27:33.783] error: replicationcontrollers "busybox1" resuming is not supported
I0916 10:27:33.783] has:replicationcontrollers "busybox0" resuming is not supported
I0916 10:27:33.864] replicationcontroller "busybox0" force deleted
I0916 10:27:33.869] replicationcontroller "busybox1" force deleted
W0916 10:27:33.970] E0916 10:27:33.730521   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:33.970] E0916 10:27:33.840660   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:33.971] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0916 10:27:33.971] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0916 10:27:33.971] E0916 10:27:33.945183   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:34.045] E0916 10:27:34.044381   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:34.732] E0916 10:27:34.732062   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:34.842] E0916 10:27:34.842146   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:27:34.943] Recording: run_namespace_tests
I0916 10:27:34.943] Running command: run_namespace_tests
I0916 10:27:34.943] 
I0916 10:27:34.944] +++ Running case: test-cmd.run_namespace_tests 
I0916 10:27:34.944] +++ working dir: /go/src/k8s.io/kubernetes
I0916 10:27:34.944] +++ command: run_namespace_tests
I0916 10:27:34.944] +++ [0916 10:27:34] Testing kubectl(v1:namespaces)
I0916 10:27:35.003] namespace/my-namespace created
W0916 10:27:35.103] E0916 10:27:34.946891   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:35.104] E0916 10:27:35.045895   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:27:35.205] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0916 10:27:35.205] namespace "my-namespace" deleted
W0916 10:27:35.734] E0916 10:27:35.733623   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:35.844] E0916 10:27:35.843792   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:35.949] E0916 10:27:35.948522   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:36.048] E0916 10:27:36.047451   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:36.735] E0916 10:27:36.735234   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:36.845] E0916 10:27:36.845286   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:36.950] E0916 10:27:36.949971   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:37.049] E0916 10:27:37.049032   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:37.737] E0916 10:27:37.736741   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:37.847] E0916 10:27:37.846955   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:37.952] E0916 10:27:37.951437   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:38.051] E0916 10:27:38.050568   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:38.739] E0916 10:27:38.739108   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:38.849] E0916 10:27:38.848496   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:38.953] E0916 10:27:38.952832   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:39.052] E0916 10:27:39.052163   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:39.741] E0916 10:27:39.740736   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:39.850] E0916 10:27:39.849943   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:39.955] E0916 10:27:39.954413   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:40.055] E0916 10:27:40.054626   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:40.220] I0916 10:27:40.219723   52896 shared_informer.go:197] Waiting for caches to sync for resource quota
W0916 10:27:40.220] I0916 10:27:40.219786   52896 shared_informer.go:204] Caches are synced for resource quota 
I0916 10:27:40.321] namespace/my-namespace condition met
I0916 10:27:40.412] Successful
I0916 10:27:40.412] message:Error from server (NotFound): namespaces "my-namespace" not found
I0916 10:27:40.413] has: not found
I0916 10:27:40.490] namespace/my-namespace created
I0916 10:27:40.590] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0916 10:27:40.819] Successful
I0916 10:27:40.820] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0916 10:27:40.820] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0916 10:27:40.830] namespace "namespace-1568629603-4341" deleted
I0916 10:27:40.830] namespace "namespace-1568629604-29282" deleted
I0916 10:27:40.830] namespace "namespace-1568629607-18878" deleted
I0916 10:27:40.830] namespace "namespace-1568629608-1977" deleted
I0916 10:27:40.831] namespace "namespace-1568629643-12121" deleted
I0916 10:27:40.831] namespace "namespace-1568629643-30817" deleted
I0916 10:27:40.831] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0916 10:27:40.832] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0916 10:27:40.832] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0916 10:27:40.832] has:warning: deleting cluster-scoped resources
I0916 10:27:40.832] Successful
I0916 10:27:40.833] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0916 10:27:40.833] namespace "kube-node-lease" deleted
I0916 10:27:40.833] namespace "my-namespace" deleted
I0916 10:27:40.833] namespace "namespace-1568629508-9570" deleted
... skipping 27 lines ...
I0916 10:27:40.840] namespace "namespace-1568629603-4341" deleted
I0916 10:27:40.841] namespace "namespace-1568629604-29282" deleted
I0916 10:27:40.841] namespace "namespace-1568629607-18878" deleted
I0916 10:27:40.841] namespace "namespace-1568629608-1977" deleted
I0916 10:27:40.841] namespace "namespace-1568629643-12121" deleted
I0916 10:27:40.841] namespace "namespace-1568629643-30817" deleted
I0916 10:27:40.842] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0916 10:27:40.842] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0916 10:27:40.842] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0916 10:27:40.842] has:namespace "my-namespace" deleted
I0916 10:27:40.931] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0916 10:27:41.016] namespace/other created
I0916 10:27:41.118] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0916 10:27:41.221] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:27:41.398] pod/valid-pod created
W0916 10:27:41.499] I0916 10:27:40.629395   52896 shared_informer.go:197] Waiting for caches to sync for garbage collector
W0916 10:27:41.500] I0916 10:27:40.629467   52896 shared_informer.go:204] Caches are synced for garbage collector 
W0916 10:27:41.500] E0916 10:27:40.741945   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:41.501] E0916 10:27:40.851802   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:41.501] E0916 10:27:40.955924   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:41.502] E0916 10:27:41.056110   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:27:41.602] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0916 10:27:41.606] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0916 10:27:41.694] Successful
I0916 10:27:41.695] message:error: a resource cannot be retrieved by name across all namespaces
I0916 10:27:41.695] has:a resource cannot be retrieved by name across all namespaces
I0916 10:27:41.792] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0916 10:27:41.879] pod "valid-pod" force deleted
I0916 10:27:41.983] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:27:42.064] namespace "other" deleted
W0916 10:27:42.166] E0916 10:27:41.743293   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:42.166] E0916 10:27:41.853600   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:42.166] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0916 10:27:42.166] E0916 10:27:41.958523   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:42.167] E0916 10:27:42.057627   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:42.746] E0916 10:27:42.745341   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:42.856] E0916 10:27:42.855535   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:42.960] E0916 10:27:42.960201   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:43.060] E0916 10:27:43.059500   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:43.747] E0916 10:27:43.747060   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:43.797] I0916 10:27:43.796792   52896 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1568629643-12121
W0916 10:27:43.805] I0916 10:27:43.804482   52896 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1568629643-12121
W0916 10:27:43.857] E0916 10:27:43.857004   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:43.962] E0916 10:27:43.961890   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:44.061] E0916 10:27:44.060966   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:44.749] E0916 10:27:44.748754   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:44.859] E0916 10:27:44.858622   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:44.963] E0916 10:27:44.963227   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:45.063] E0916 10:27:45.062571   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:45.759] E0916 10:27:45.758799   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:45.861] E0916 10:27:45.860403   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:45.980] E0916 10:27:45.979342   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:46.065] E0916 10:27:46.064539   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:46.761] E0916 10:27:46.760812   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:46.863] E0916 10:27:46.863080   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:46.981] E0916 10:27:46.980955   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:47.066] E0916 10:27:47.065676   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:27:47.187] +++ exit code: 0
I0916 10:27:47.229] Recording: run_secrets_test
I0916 10:27:47.229] Running command: run_secrets_test
I0916 10:27:47.256] 
I0916 10:27:47.260] +++ Running case: test-cmd.run_secrets_test 
I0916 10:27:47.263] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 44 lines ...
I0916 10:27:47.922] core.sh:733: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:27:48.002] secret/test-secret created
I0916 10:27:48.102] core.sh:737: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0916 10:27:48.199] core.sh:738: Successful get secret/test-secret --namespace=test-secrets {{.type}}: test-type
I0916 10:27:48.376] secret "test-secret" deleted
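The core.sh:733 and core.sh:738 checks above create a generic secret with an explicit type in the test-secrets namespace, verify its name and type through go-template output, and delete it. A minimal sketch of equivalent kubectl commands, assuming the namespace already exists; the --from-literal payload is a hypothetical stand-in for the fixture the test actually uses:

    kubectl create secret generic test-secret --namespace=test-secrets \
        --from-literal=key1=value1 --type=test-type
    kubectl get secret/test-secret --namespace=test-secrets -o go-template='{{.metadata.name}}'   # test-secret
    kubectl get secret/test-secret --namespace=test-secrets -o go-template='{{.type}}'            # test-type
    kubectl delete secret test-secret --namespace=test-secrets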
W0916 10:27:48.477] I0916 10:27:47.521587   69128 loader.go:375] Config loaded from file:  /tmp/tmp.cZj11XPBMR/.kube/config
W0916 10:27:48.477] E0916 10:27:47.762240   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:48.478] E0916 10:27:47.864723   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:48.478] E0916 10:27:47.982470   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:48.478] E0916 10:27:48.067182   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:27:48.579] core.sh:748: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:27:48.579] secret/test-secret created
I0916 10:27:48.673] core.sh:752: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0916 10:27:48.788] core.sh:753: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
I0916 10:27:48.980] secret "test-secret" deleted
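core.sh:748 through core.sh:753 repeat the same create/verify/delete cycle for a registry-credentials secret, whose type is reported as kubernetes.io/dockerconfigjson. A sketch of one way to produce such a secret; the registry coordinates are made up for illustration:

    kubectl create secret docker-registry test-secret --namespace=test-secrets \
        --docker-server=https://registry.example.com \
        --docker-username=example-user --docker-password=example-pass --docker-email=user@example.com
    kubectl get secret/test-secret --namespace=test-secrets -o go-template='{{.type}}'   # kubernetes.io/dockerconfigjson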
W0916 10:27:49.080] E0916 10:27:48.763696   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:49.081] E0916 10:27:48.866239   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:49.081] E0916 10:27:48.983815   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:49.081] E0916 10:27:49.068536   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:27:49.182] core.sh:763: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:27:49.182] secret/test-secret created
I0916 10:27:49.276] core.sh:766: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0916 10:27:49.369] core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0916 10:27:49.451] secret "test-secret" deleted
I0916 10:27:49.540] secret/test-secret created
I0916 10:27:49.641] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0916 10:27:49.735] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0916 10:27:49.819] secret "test-secret" deleted
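core.sh:766 through core.sh:774 run the cycle twice more for TLS secrets (type kubernetes.io/tls). A sketch assuming a throwaway self-signed key pair; the file paths are hypothetical, not the repo's test data:

    openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
        -keyout /tmp/tls.key -out /tmp/tls.crt -subj "/CN=test"
    kubectl create secret tls test-secret --namespace=test-secrets \
        --cert=/tmp/tls.crt --key=/tmp/tls.key
    kubectl get secret/test-secret --namespace=test-secrets -o go-template='{{.type}}'   # kubernetes.io/tls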
W0916 10:27:49.919] E0916 10:27:49.765375   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:49.920] E0916 10:27:49.867792   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:49.986] E0916 10:27:49.985269   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:50.070] E0916 10:27:50.070176   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:27:50.171] secret/secret-string-data created
I0916 10:27:50.171] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0916 10:27:50.197] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0916 10:27:50.293] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0916 10:27:50.378] secret "secret-string-data" deleted
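The core.sh:796-798 checks show that stringData is write-only: the API server folds the plain-text values into .data as base64 (djE= and djI= decode to v1 and v2) and does not persist .stringData. A sketch of a manifest that reproduces those assertions:

    kubectl apply --namespace=test-secrets -f - <<'EOF'
    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-string-data
    stringData:
      k1: v1
      k2: v2
    EOF
    kubectl get secret/secret-string-data --namespace=test-secrets -o go-template='{{.data}}'        # map[k1:djE= k2:djI=]
    kubectl get secret/secret-string-data --namespace=test-secrets -o go-template='{{.stringData}}'  # <no value>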
I0916 10:27:50.482] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:27:50.655] secret "test-secret" deleted
I0916 10:27:50.745] namespace "test-secrets" deleted
W0916 10:27:50.846] I0916 10:27:50.402878   52896 namespace_controller.go:171] Namespace has been deleted my-namespace
W0916 10:27:50.846] E0916 10:27:50.767022   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:50.870] E0916 10:27:50.869320   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:50.883] I0916 10:27:50.882973   52896 namespace_controller.go:171] Namespace has been deleted kube-node-lease
W0916 10:27:50.895] I0916 10:27:50.894419   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629508-9570
W0916 10:27:50.913] I0916 10:27:50.912494   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629529-17029
W0916 10:27:50.913] I0916 10:27:50.912619   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629523-8073
W0916 10:27:50.923] I0916 10:27:50.923119   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629527-1158
W0916 10:27:50.925] I0916 10:27:50.924891   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629523-19347
W0916 10:27:50.926] I0916 10:27:50.925651   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629528-10509
W0916 10:27:50.930] I0916 10:27:50.929562   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629514-13408
W0916 10:27:50.931] I0916 10:27:50.930683   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629511-26498
W0916 10:27:50.987] E0916 10:27:50.987029   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:50.993] I0916 10:27:50.993214   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629520-6392
W0916 10:27:51.072] E0916 10:27:51.071809   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:51.098] I0916 10:27:51.097657   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629538-18643
W0916 10:27:51.105] I0916 10:27:51.105127   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629551-30705
W0916 10:27:51.112] I0916 10:27:51.112391   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629552-29440
W0916 10:27:51.118] I0916 10:27:51.118012   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629539-5570
W0916 10:27:51.128] I0916 10:27:51.128243   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629554-853
W0916 10:27:51.129] I0916 10:27:51.128267   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629555-35
... skipping 13 lines ...
W0916 10:27:51.423] I0916 10:27:51.422674   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629603-4341
W0916 10:27:51.483] I0916 10:27:51.482660   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629604-29282
W0916 10:27:51.495] I0916 10:27:51.494633   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629607-18878
W0916 10:27:51.509] I0916 10:27:51.508501   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629608-1977
W0916 10:27:51.511] I0916 10:27:51.510780   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629643-30817
W0916 10:27:51.568] I0916 10:27:51.568154   52896 namespace_controller.go:171] Namespace has been deleted namespace-1568629643-12121
W0916 10:27:51.769] E0916 10:27:51.768549   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:51.871] E0916 10:27:51.870958   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:51.989] E0916 10:27:51.988696   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:52.074] E0916 10:27:52.073589   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:52.158] I0916 10:27:52.157582   52896 namespace_controller.go:171] Namespace has been deleted other
W0916 10:27:52.770] E0916 10:27:52.769973   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:52.873] E0916 10:27:52.872506   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:52.991] E0916 10:27:52.990385   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:53.076] E0916 10:27:53.075261   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:53.772] E0916 10:27:53.771457   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:53.874] E0916 10:27:53.873924   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:53.992] E0916 10:27:53.991878   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:54.077] E0916 10:27:54.076698   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:54.773] E0916 10:27:54.772850   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:54.876] E0916 10:27:54.875309   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:54.994] E0916 10:27:54.993312   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:55.079] E0916 10:27:55.078306   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:55.775] E0916 10:27:55.774639   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:55.877] E0916 10:27:55.877293   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:27:55.978] +++ exit code: 0
I0916 10:27:55.978] Recording: run_configmap_tests
I0916 10:27:55.979] Running command: run_configmap_tests
I0916 10:27:55.979] 
I0916 10:27:55.979] +++ Running case: test-cmd.run_configmap_tests 
I0916 10:27:55.979] +++ working dir: /go/src/k8s.io/kubernetes
I0916 10:27:55.979] +++ command: run_configmap_tests
I0916 10:27:55.979] +++ [0916 10:27:55] Creating namespace namespace-1568629675-13471
I0916 10:27:56.058] namespace/namespace-1568629675-13471 created
I0916 10:27:56.132] Context "test" modified.
I0916 10:27:56.139] +++ [0916 10:27:56] Testing configmaps
W0916 10:27:56.240] E0916 10:27:55.994821   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:56.241] E0916 10:27:56.079880   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:27:56.348] configmap/test-configmap created
I0916 10:27:56.455] core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
I0916 10:27:56.541] configmap "test-configmap" deleted
I0916 10:27:56.650] core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
I0916 10:27:56.728] namespace/test-configmaps created
I0916 10:27:56.832] core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
... skipping 3 lines ...
I0916 10:27:57.196] configmap/test-binary-configmap created
I0916 10:27:57.294] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0916 10:27:57.389] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0916 10:27:57.648] configmap "test-configmap" deleted
I0916 10:27:57.734] configmap "test-binary-configmap" deleted
I0916 10:27:57.823] namespace "test-configmaps" deleted
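The run_configmap_tests block covers a plain ConfigMap in the current namespace (core.sh:28) and then a plain plus a binary ConfigMap inside a dedicated test-configmaps namespace (core.sh:48 and core.sh:49). A sketch of equivalent commands; the literal value and the binary file are stand-ins for the actual fixtures, and non-UTF-8 file content is what ends up in the ConfigMap's binaryData field:

    kubectl create configmap test-configmap --from-literal=key1=value1
    kubectl delete configmap test-configmap
    kubectl create namespace test-configmaps
    kubectl create configmap test-configmap --namespace=test-configmaps --from-literal=key1=value1
    head -c 16 /dev/urandom > /tmp/bin.dat
    kubectl create configmap test-binary-configmap --namespace=test-configmaps --from-file=data=/tmp/bin.dat
    kubectl get configmap/test-binary-configmap --namespace=test-configmaps -o go-template='{{.metadata.name}}'   # test-binary-configmap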
W0916 10:27:57.924] E0916 10:27:56.776298   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:57.924] E0916 10:27:56.878868   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:57.925] E0916 10:27:56.996666   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:57.925] E0916 10:27:57.081215   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:57.925] E0916 10:27:57.777874   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:57.926] E0916 10:27:57.880562   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:57.998] E0916 10:27:57.998053   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:58.083] E0916 10:27:58.082685   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:58.780] E0916 10:27:58.779401   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:58.882] E0916 10:27:58.882100   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:59.000] E0916 10:27:58.999456   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:59.084] E0916 10:27:59.084123   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:59.781] E0916 10:27:59.780715   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:27:59.886] E0916 10:27:59.883732   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:00.001] E0916 10:28:00.000912   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:00.086] E0916 10:28:00.085622   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:00.782] E0916 10:28:00.782171   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:00.849] I0916 10:28:00.848547   52896 namespace_controller.go:171] Namespace has been deleted test-secrets
W0916 10:28:00.886] E0916 10:28:00.885246   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:01.005] E0916 10:28:01.004379   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:01.087] E0916 10:28:01.087045   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:01.784] E0916 10:28:01.783682   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:01.887] E0916 10:28:01.886748   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:02.006] E0916 10:28:02.005794   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:02.089] E0916 10:28:02.088469   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:02.786] E0916 10:28:02.785240   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:02.888] E0916 10:28:02.887825   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:02.989] +++ exit code: 0
I0916 10:28:02.992] Recording: run_client_config_tests
I0916 10:28:02.992] Running command: run_client_config_tests
I0916 10:28:03.020] 
I0916 10:28:03.023] +++ Running case: test-cmd.run_client_config_tests 
I0916 10:28:03.026] +++ working dir: /go/src/k8s.io/kubernetes
I0916 10:28:03.030] +++ command: run_client_config_tests
I0916 10:28:03.043] +++ [0916 10:28:03] Creating namespace namespace-1568629683-29673
I0916 10:28:03.124] namespace/namespace-1568629683-29673 created
I0916 10:28:03.202] Context "test" modified.
I0916 10:28:03.210] +++ [0916 10:28:03] Testing client config
I0916 10:28:03.287] Successful
I0916 10:28:03.288] message:error: stat missing: no such file or directory
I0916 10:28:03.288] has:missing: no such file or directory
I0916 10:28:03.364] Successful
I0916 10:28:03.365] message:error: stat missing: no such file or directory
I0916 10:28:03.365] has:missing: no such file or directory
I0916 10:28:03.442] Successful
I0916 10:28:03.443] message:error: stat missing: no such file or directory
I0916 10:28:03.443] has:missing: no such file or directory
I0916 10:28:03.524] Successful
I0916 10:28:03.524] message:Error in configuration: context was not found for specified context: missing-context
I0916 10:28:03.525] has:context was not found for specified context: missing-context
I0916 10:28:03.603] Successful
I0916 10:28:03.603] message:error: no server found for cluster "missing-cluster"
I0916 10:28:03.604] has:no server found for cluster "missing-cluster"
I0916 10:28:03.682] Successful
I0916 10:28:03.683] message:error: auth info "missing-user" does not exist
I0916 10:28:03.683] has:auth info "missing-user" does not exist
W0916 10:28:03.783] E0916 10:28:03.007192   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:03.784] E0916 10:28:03.090124   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:03.787] E0916 10:28:03.786536   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:03.887] Successful
I0916 10:28:03.888] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0916 10:28:03.888] has:error loading config file
I0916 10:28:03.911] Successful
I0916 10:28:03.912] message:error: stat missing-config: no such file or directory
I0916 10:28:03.912] has:no such file or directory
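run_client_config_tests asserts on the exact error text kubectl prints when its client configuration is broken, rather than on any server response. A sketch of invocations that provoke each message checked above; the verb (get pods) is an arbitrary choice, and /tmp/newconfig.yaml stands for whatever invalid kubeconfig the harness writes:

    kubectl get pods --kubeconfig=missing               # error: stat missing: no such file or directory
    kubectl get pods --context=missing-context          # context was not found for specified context: missing-context
    kubectl get pods --cluster=missing-cluster          # error: no server found for cluster "missing-cluster"
    kubectl get pods --user=missing-user                # error: auth info "missing-user" does not exist
    kubectl get pods --kubeconfig=/tmp/newconfig.yaml   # error loading config file: no kind "Config" is registered for version "v-1"
    kubectl get pods --kubeconfig=missing-config        # error: stat missing-config: no such file or directory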
I0916 10:28:03.926] +++ exit code: 0
I0916 10:28:03.967] Recording: run_service_accounts_tests
I0916 10:28:03.967] Running command: run_service_accounts_tests
I0916 10:28:03.993] 
I0916 10:28:03.996] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 7 lines ...
I0916 10:28:04.358] namespace/test-service-accounts created
I0916 10:28:04.460] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0916 10:28:04.537] serviceaccount/test-service-account created
I0916 10:28:04.640] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0916 10:28:04.721] serviceaccount "test-service-account" deleted
I0916 10:28:04.810] namespace "test-service-accounts" deleted
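core.sh:832 and core.sh:838 apply the same create/verify/delete pattern to ServiceAccounts in their own namespace. Equivalent commands, as a sketch:

    kubectl create namespace test-service-accounts
    kubectl create serviceaccount test-service-account --namespace=test-service-accounts
    kubectl get serviceaccount/test-service-account --namespace=test-service-accounts \
        -o go-template='{{.metadata.name}}'   # test-service-account
    kubectl delete serviceaccount test-service-account --namespace=test-service-accounts
    kubectl delete namespace test-service-accounts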
W0916 10:28:04.911] E0916 10:28:03.889732   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:04.912] E0916 10:28:04.009385   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:04.912] E0916 10:28:04.091618   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:04.912] E0916 10:28:04.787960   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:04.913] E0916 10:28:04.891590   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:05.011] E0916 10:28:05.010996   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:05.093] E0916 10:28:05.093100   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:05.790] E0916 10:28:05.789476   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:05.893] E0916 10:28:05.893067   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:06.013] E0916 10:28:06.012687   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:06.095] E0916 10:28:06.094563   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:06.791] E0916 10:28:06.790913   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:06.895] E0916 10:28:06.894966   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:07.014] E0916 10:28:07.014059   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:07.097] E0916 10:28:07.096430   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:07.793] E0916 10:28:07.792418   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:07.897] E0916 10:28:07.896633   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:07.924] I0916 10:28:07.923808   52896 namespace_controller.go:171] Namespace has been deleted test-configmaps
W0916 10:28:08.016] E0916 10:28:08.015530   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:08.098] E0916 10:28:08.098136   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:08.794] E0916 10:28:08.793876   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:08.899] E0916 10:28:08.898300   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:09.017] E0916 10:28:09.017170   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:09.103] E0916 10:28:09.102851   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:09.796] E0916 10:28:09.795384   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:09.900] E0916 10:28:09.899423   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:10.000] +++ exit code: 0
I0916 10:28:10.001] Recording: run_job_tests
I0916 10:28:10.001] Running command: run_job_tests
I0916 10:28:10.006] 
I0916 10:28:10.010] +++ Running case: test-cmd.run_job_tests 
I0916 10:28:10.013] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0916 10:28:10.859] Labels:                        run=pi
I0916 10:28:10.859] Annotations:                   <none>
I0916 10:28:10.859] Schedule:                      59 23 31 2 *
I0916 10:28:10.859] Concurrency Policy:            Allow
I0916 10:28:10.859] Suspend:                       False
I0916 10:28:10.859] Successful Job History Limit:  3
I0916 10:28:10.860] Failed Job History Limit:      1
I0916 10:28:10.860] Starting Deadline Seconds:     <unset>
I0916 10:28:10.860] Selector:                      <unset>
I0916 10:28:10.860] Parallelism:                   <unset>
I0916 10:28:10.860] Completions:                   <unset>
I0916 10:28:10.860] Pod Template:
I0916 10:28:10.860]   Labels:  run=pi
... skipping 32 lines ...
I0916 10:28:11.433]                 run=pi
I0916 10:28:11.433] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0916 10:28:11.433] Controlled By:  CronJob/pi
I0916 10:28:11.433] Parallelism:    1
I0916 10:28:11.433] Completions:    1
I0916 10:28:11.433] Start Time:     Mon, 16 Sep 2019 10:28:11 +0000
I0916 10:28:11.433] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0916 10:28:11.433] Pod Template:
I0916 10:28:11.433]   Labels:  controller-uid=89471b30-bb02-4299-930b-aa4884e65ce2
I0916 10:28:11.434]            job-name=test-job
I0916 10:28:11.434]            run=pi
I0916 10:28:11.434]   Containers:
I0916 10:28:11.434]    pi:
... skipping 15 lines ...
I0916 10:28:11.435]   Type    Reason            Age   From            Message
I0916 10:28:11.435]   ----    ------            ----  ----            -------
I0916 10:28:11.435]   Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-c6t4w
I0916 10:28:11.519] job.batch "test-job" deleted
I0916 10:28:11.613] cronjob.batch "pi" deleted
I0916 10:28:11.702] namespace "test-jobs" deleted
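run_job_tests creates a CronJob named pi on schedule 59 23 31 2 *, instantiates a Job from it (the describe output above shows Controlled By: CronJob/pi and the cronjob.kubernetes.io/instantiate: manual annotation), and then deletes everything; the deprecation warning below confirms the CronJob itself came from the old kubectl run --generator=cronjob/v1beta1 path. A sketch using the non-deprecated commands; the image and the perl invocation are assumptions, since the excerpt elides the container spec:

    kubectl create namespace test-jobs
    kubectl create cronjob pi --namespace=test-jobs --schedule="59 23 31 2 *" \
        --image=k8s.gcr.io/perl -- perl -Mbignum=bpi -wle 'print bpi(20)'
    kubectl create job test-job --namespace=test-jobs --from=cronjob/pi
    kubectl describe cronjob pi --namespace=test-jobs
    kubectl describe job test-job --namespace=test-jobs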
W0916 10:28:11.802] E0916 10:28:10.018536   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:11.803] E0916 10:28:10.104239   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:11.803] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0916 10:28:11.803] E0916 10:28:10.796824   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:11.803] E0916 10:28:10.901429   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:11.804] E0916 10:28:11.020669   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:11.804] E0916 10:28:11.105815   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:11.804] I0916 10:28:11.148666   52896 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"89471b30-bb02-4299-930b-aa4884e65ce2", APIVersion:"batch/v1", ResourceVersion:"1399", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-c6t4w
W0916 10:28:11.804] E0916 10:28:11.798281   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:11.903] E0916 10:28:11.902934   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:12.023] E0916 10:28:12.022555   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:12.108] E0916 10:28:12.107319   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:12.800] E0916 10:28:12.799642   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:12.905] E0916 10:28:12.904646   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:13.024] E0916 10:28:13.024048   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:13.109] E0916 10:28:13.108722   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:13.801] E0916 10:28:13.801035   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:13.906] E0916 10:28:13.906193   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:14.026] E0916 10:28:14.025603   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:14.110] E0916 10:28:14.110069   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:14.803] E0916 10:28:14.802581   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:14.908] E0916 10:28:14.907627   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:14.910] I0916 10:28:14.910076   52896 namespace_controller.go:171] Namespace has been deleted test-service-accounts
W0916 10:28:15.027] E0916 10:28:15.027064   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:15.112] E0916 10:28:15.111616   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:15.804] E0916 10:28:15.804134   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:15.909] E0916 10:28:15.909284   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:16.029] E0916 10:28:16.028534   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:16.113] E0916 10:28:16.113182   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:16.805] E0916 10:28:16.805105   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:16.906] +++ exit code: 0
I0916 10:28:16.906] Recording: run_create_job_tests
I0916 10:28:16.907] Running command: run_create_job_tests
I0916 10:28:16.907] 
I0916 10:28:16.907] +++ Running case: test-cmd.run_create_job_tests 
I0916 10:28:16.907] +++ working dir: /go/src/k8s.io/kubernetes
I0916 10:28:16.907] +++ command: run_create_job_tests
I0916 10:28:16.914] +++ [0916 10:28:16] Creating namespace namespace-1568629696-4573
I0916 10:28:16.992] namespace/namespace-1568629696-4573 created
I0916 10:28:17.068] Context "test" modified.
I0916 10:28:17.161] job.batch/test-job created
W0916 10:28:17.262] E0916 10:28:16.910683   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:17.263] E0916 10:28:17.030225   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:17.263] E0916 10:28:17.114655   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:17.264] I0916 10:28:17.158637   52896 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568629696-4573", Name:"test-job", UID:"78452e73-da08-43db-8429-dbec50684a39", APIVersion:"batch/v1", ResourceVersion:"1418", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-s757n
I0916 10:28:17.364] create.sh:86: Successful get job test-job {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/nginx:test-cmd
I0916 10:28:17.366] job.batch "test-job" deleted
I0916 10:28:17.444] job.batch/test-job-pi created
I0916 10:28:17.546] create.sh:92: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl
I0916 10:28:17.630] job.batch "test-job-pi" deleted
I0916 10:28:17.724] cronjob.batch/test-pi created
W0916 10:28:17.825] I0916 10:28:17.437763   52896 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568629696-4573", Name:"test-job-pi", UID:"ba4d4a12-90ad-435b-9929-9394ae05331c", APIVersion:"batch/v1", ResourceVersion:"1425", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-l7p6x
W0916 10:28:17.826] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0916 10:28:17.826] E0916 10:28:17.806533   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:17.827] I0916 10:28:17.823763   52896 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568629696-4573", Name:"my-pi", UID:"73bee61b-79fd-4035-a67f-fde58365fa65", APIVersion:"batch/v1", ResourceVersion:"1434", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-xmrvz
W0916 10:28:17.912] E0916 10:28:17.912382   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:18.013] job.batch/my-pi created
I0916 10:28:18.014] Successful
I0916 10:28:18.015] message:[perl -Mbignum=bpi -wle print bpi(10)]
I0916 10:28:18.015] has:perl -Mbignum=bpi -wle print bpi(10)
I0916 10:28:18.015] job.batch "my-pi" deleted
I0916 10:28:18.097] cronjob.batch "test-pi" deleted
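run_create_job_tests exercises the three kubectl create job paths checked at create.sh:86 and create.sh:92: from a bare image, from an image plus command, and from an existing CronJob. A sketch; the test-pi schedule is an assumption, and the final go-template query is just one way to reproduce the [perl -Mbignum=bpi -wle print bpi(10)] message shown above:

    kubectl create job test-job --image=k8s.gcr.io/nginx:test-cmd
    kubectl create job test-job-pi --image=k8s.gcr.io/perl -- perl -Mbignum=bpi -wle 'print bpi(10)'
    kubectl create cronjob test-pi --schedule="*/1 * * * *" \
        --image=k8s.gcr.io/perl -- perl -Mbignum=bpi -wle 'print bpi(10)'
    kubectl create job my-pi --from=cronjob/test-pi
    kubectl get job my-pi -o go-template='{{(index .spec.template.spec.containers 0).command}}'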
... skipping 7 lines ...
I0916 10:28:18.205] +++ [0916 10:28:18] Creating namespace namespace-1568629698-20467
I0916 10:28:18.285] namespace/namespace-1568629698-20467 created
I0916 10:28:18.362] Context "test" modified.
I0916 10:28:18.370] +++ [0916 10:28:18] Testing pod templates
I0916 10:28:18.466] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:28:18.665] podtemplate/nginx created
W0916 10:28:18.766] E0916 10:28:18.031616   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:18.766] E0916 10:28:18.116280   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:18.767] I0916 10:28:18.662400   49368 controller.go:606] quota admission added evaluator for: podtemplates
W0916 10:28:18.808] E0916 10:28:18.808074   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:18.909] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0916 10:28:18.909] NAME    CONTAINERS   IMAGES   POD LABELS
I0916 10:28:18.910] nginx   nginx        nginx    name=nginx
W0916 10:28:19.010] E0916 10:28:18.913947   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:19.033] E0916 10:28:19.033119   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:19.119] E0916 10:28:19.118626   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:19.220] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0916 10:28:19.220] podtemplate "nginx" deleted
I0916 10:28:19.286] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
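core.sh:1415 through core.sh:1431 create and delete a PodTemplate named nginx; the table above (CONTAINERS/IMAGES/POD LABELS) implies a single nginx container and a name=nginx label. A manifest consistent with that output, as a sketch; the repo's actual fixture may differ:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: PodTemplate
    metadata:
      name: nginx
    template:
      metadata:
        labels:
          name: nginx
      spec:
        containers:
        - name: nginx
          image: nginx
    EOF
    kubectl get podtemplates   # NAME nginx, CONTAINERS nginx, IMAGES nginx, POD LABELS name=nginx
    kubectl delete podtemplate nginx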
I0916 10:28:19.302] +++ exit code: 0
I0916 10:28:19.342] Recording: run_service_tests
I0916 10:28:19.343] Running command: run_service_tests
... skipping 2 lines ...
I0916 10:28:19.378] +++ working dir: /go/src/k8s.io/kubernetes
I0916 10:28:19.381] +++ command: run_service_tests
I0916 10:28:19.458] Context "test" modified.
I0916 10:28:19.466] +++ [0916 10:28:19] Testing kubectl(v1:services)
I0916 10:28:19.561] core.sh:858: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0916 10:28:19.734] service/redis-master created
W0916 10:28:19.835] E0916 10:28:19.809638   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:19.916] E0916 10:28:19.915763   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:20.017] core.sh:862: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0916 10:28:20.017] core.sh:864: Successful describe services redis-master:
I0916 10:28:20.017] Name:              redis-master
I0916 10:28:20.017] Namespace:         default
I0916 10:28:20.017] Labels:            app=redis
I0916 10:28:20.018]                    role=master
... skipping 35 lines ...
I0916 10:28:20.209] IP:                10.0.0.78
I0916 10:28:20.209] Port:              <unset>  6379/TCP
I0916 10:28:20.209] TargetPort:        6379/TCP
I0916 10:28:20.209] Endpoints:         <none>
I0916 10:28:20.210] Session Affinity:  None
I0916 10:28:20.210] 
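The describe output above pins down the redis-master Service: labels app=redis, role=master, tier=backend, a single unnamed 6379/TCP port targeting 6379, ClusterIP type, and no endpoints yet. A manifest consistent with that state, as a sketch; the selector is assumed to mirror the labels, which matches the later core.sh:894 selector check:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: redis-master
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      type: ClusterIP
      selector:
        app: redis
        role: master
        tier: backend
      ports:
      - port: 6379
        targetPort: 6379
    EOF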
W0916 10:28:20.310] E0916 10:28:20.034708   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:20.311] E0916 10:28:20.120598   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:20.411] core.sh:870: Successful describe
I0916 10:28:20.411] Name:              redis-master
I0916 10:28:20.411] Namespace:         default
I0916 10:28:20.412] Labels:            app=redis
I0916 10:28:20.412]                    role=master
I0916 10:28:20.412]                    tier=backend
... skipping 165 lines ...
I0916 10:28:21.042]   selector:
I0916 10:28:21.042]     role: padawan
I0916 10:28:21.043]   sessionAffinity: None
I0916 10:28:21.043]   type: ClusterIP
I0916 10:28:21.043] status:
I0916 10:28:21.043]   loadBalancer: {}
W0916 10:28:21.143] E0916 10:28:20.811036   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:21.144] E0916 10:28:20.917650   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:21.144] E0916 10:28:21.036145   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:21.144] E0916 10:28:21.123221   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:21.245] service/redis-master selector updated
I0916 10:28:21.262] core.sh:890: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: padawan:
I0916 10:28:21.347] service/redis-master selector updated
I0916 10:28:21.451] core.sh:894: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0916 10:28:21.534] apiVersion: v1
I0916 10:28:21.534] kind: Service
... skipping 17 lines ...
I0916 10:28:21.536]   selector:
I0916 10:28:21.536]     role: padawan
I0916 10:28:21.536]   sessionAffinity: None
I0916 10:28:21.536]   type: ClusterIP
I0916 10:28:21.536] status:
I0916 10:28:21.536]   loadBalancer: {}
W0916 10:28:21.637] error: you must specify resources by --filename when --local is set.
W0916 10:28:21.637] Example resource specifications include:
W0916 10:28:21.637]    '-f rsrc.yaml'
W0916 10:28:21.637]    '--filename=rsrc.json'
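The error above comes from kubectl set selector: with --local it refuses to touch the server and therefore needs the object supplied via --filename. A sketch of both forms; the file name is hypothetical:

    # server-side: patches the live Service object
    kubectl set selector service redis-master role=padawan
    # local only: rewrites the manifest on stdout, changes nothing in the cluster
    kubectl set selector -f redis-master-service.yaml role=padawan --local -o yaml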
I0916 10:28:21.738] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0916 10:28:21.892] (Bcore.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0916 10:28:21.982] service "redis-master" deleted
I0916 10:28:22.087] core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0916 10:28:22.184] core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0916 10:28:22.357] service/redis-master created
W0916 10:28:22.457] I0916 10:28:21.796574   52896 namespace_controller.go:171] Namespace has been deleted test-jobs
W0916 10:28:22.458] E0916 10:28:21.812603   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:22.458] E0916 10:28:21.919292   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:22.458] E0916 10:28:22.037633   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:22.459] E0916 10:28:22.125216   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:22.559] core.sh:920: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0916 10:28:22.565] core.sh:924: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0916 10:28:22.737] service/service-v1-test created
W0916 10:28:22.838] E0916 10:28:22.814217   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:22.921] E0916 10:28:22.921141   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:23.022] core.sh:945: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0916 10:28:23.028] service/service-v1-test replaced
W0916 10:28:23.128] E0916 10:28:23.039155   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:23.129] E0916 10:28:23.126662   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:23.229] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0916 10:28:23.230] service "redis-master" deleted
I0916 10:28:23.323] service "service-v1-test" deleted
I0916 10:28:23.424] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0916 10:28:23.525] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0916 10:28:23.705] service/redis-master created
W0916 10:28:23.816] E0916 10:28:23.815548   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:23.917] service/redis-slave created
I0916 10:28:23.999] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0916 10:28:24.089] Successful
I0916 10:28:24.089] message:NAME           RSRC
I0916 10:28:24.090] kubernetes     145
I0916 10:28:24.090] redis-master   1470
I0916 10:28:24.090] redis-slave    1473
I0916 10:28:24.090] has:redis-master
I0916 10:28:24.189] core.sh:979: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0916 10:28:24.278] service "redis-master" deleted
I0916 10:28:24.288] service "redis-slave" deleted
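The NAME/RSRC listing above pairs each Service with its metadata.resourceVersion. Custom columns are one way to reproduce that shape, though the harness may use a different output flag:

    kubectl get services -o custom-columns=NAME:.metadata.name,RSRC:.metadata.resourceVersion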
W0916 10:28:24.389] E0916 10:28:23.922585   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:24.389] E0916 10:28:24.040743   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:24.390] E0916 10:28:24.128274   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:24.490] core.sh:986: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0916 10:28:24.499] core.sh:990: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0916 10:28:24.577] (Bservice/beep-boop created
I0916 10:28:24.680] core.sh:994: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
I0916 10:28:24.775] core.sh:998: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
I0916 10:28:24.861] service "beep-boop" deleted
W0916 10:28:24.962] E0916 10:28:24.816962   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:24.963] E0916 10:28:24.924128   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:25.042] E0916 10:28:25.042255   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:25.130] E0916 10:28:25.129969   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:25.141] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0916 10:28:25.161] I0916 10:28:25.160955   52896 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"33ea732b-31e1-4bcb-81aa-1ead6d4ed44e", APIVersion:"apps/v1", ResourceVersion:"1485", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-bd968f46 to 2
W0916 10:28:25.167] I0916 10:28:25.166745   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"7c31b2a4-969e-42c6-aaf4-a025c5c97072", APIVersion:"apps/v1", ResourceVersion:"1486", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-pn2nc
W0916 10:28:25.172] I0916 10:28:25.171843   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"7c31b2a4-969e-42c6-aaf4-a025c5c97072", APIVersion:"apps/v1", ResourceVersion:"1486", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-szk52
I0916 10:28:25.273] core.sh:1005: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0916 10:28:25.273] core.sh:1009: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 16 lines ...
I0916 10:28:25.904] +++ [0916 10:28:25] Creating namespace namespace-1568629705-23474
I0916 10:28:25.987] namespace/namespace-1568629705-23474 created
I0916 10:28:26.069] Context "test" modified.
I0916 10:28:26.076] +++ [0916 10:28:26] Testing kubectl(v1:daemonsets)
I0916 10:28:26.174] apps.sh:30: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:28:26.362] daemonset.apps/bind created
W0916 10:28:26.463] E0916 10:28:25.818236   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:26.464] E0916 10:28:25.925702   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:26.464] E0916 10:28:26.045141   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:26.464] E0916 10:28:26.131548   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:26.465] I0916 10:28:26.358820   49368 controller.go:606] quota admission added evaluator for: daemonsets.apps
W0916 10:28:26.465] I0916 10:28:26.370124   49368 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0916 10:28:26.565] apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
I0916 10:28:26.646] daemonset.apps/bind configured
I0916 10:28:26.753] apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
I0916 10:28:26.848] daemonset.apps/bind image updated
W0916 10:28:26.949] E0916 10:28:26.819857   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:26.950] E0916 10:28:26.927177   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:27.047] E0916 10:28:27.046804   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:27.133] E0916 10:28:27.133045   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:27.234] apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
I0916 10:28:27.234] daemonset.apps/bind env updated
I0916 10:28:27.234] apps.sh:42: Successful get daemonsets bind {{.metadata.generation}}: 3
I0916 10:28:27.253] daemonset.apps/bind resource requirements updated
I0916 10:28:27.358] apps.sh:44: Successful get daemonsets bind {{.metadata.generation}}: 4
I0916 10:28:27.453] daemonset.apps/bind restarted
... skipping 9 lines ...
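The apps.sh:34-44 checks above track how each mutation of the bind DaemonSet bumps .metadata.generation. A plausible reconstruction of the kinds of commands being exercised, assuming the same bind fixture (the container name kubernetes-pause appears in the recorded manifest later in this log; the exact images and values used by apps.sh may differ):

  # Each of these edits the pod template and should increment .metadata.generation
  kubectl set image daemonset/bind kubernetes-pause=k8s.gcr.io/pause:latest
  kubectl set env daemonset/bind DEMO_VAR=demo            # DEMO_VAR is an illustrative value
  kubectl set resources daemonset/bind --limits=cpu=200m,memory=512Mi
  kubectl rollout restart daemonset/bind
  # Verify the bump the same way the assertions do
  kubectl get daemonsets bind -o go-template='{{.metadata.generation}}'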
I0916 10:28:27.759] +++ [0916 10:28:27] Creating namespace namespace-1568629707-30468
I0916 10:28:27.841] namespace/namespace-1568629707-30468 created
I0916 10:28:27.918] Context "test" modified.
I0916 10:28:27.926] +++ [0916 10:28:27] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
I0916 10:28:28.028] apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:28:28.206] daemonset.apps/bind created
W0916 10:28:28.307] E0916 10:28:27.821541   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:28.308] E0916 10:28:27.928750   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:28.309] E0916 10:28:28.048967   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:28.309] E0916 10:28:28.134817   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:28.410] apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1568629707-30468"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I0916 10:28:28.411]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
I0916 10:28:28.421] daemonset.apps/bind skipped rollback (current template already matches revision 1)
I0916 10:28:28.531] apps.sh:73: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0916 10:28:28.637] apps.sh:74: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0916 10:28:28.823] daemonset.apps/bind configured
W0916 10:28:28.924] E0916 10:28:28.822752   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:28.931] E0916 10:28:28.931396   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:29.032] apps.sh:77: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0916 10:28:29.039] apps.sh:78: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0916 10:28:29.153] apps.sh:79: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0916 10:28:29.260] apps.sh:80: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1568629707-30468"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I0916 10:28:29.262]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:map[deprecated.daemonset.template.generation:2 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1568629707-30468"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:latest","name":"kubernetes-pause"},{"image":"k8s.gcr.io/nginx:test-cmd","name":"app"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I0916 10:28:29.262]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
... skipping 9 lines ...
I0916 10:28:29.365]   Volumes:	<none>
I0916 10:28:29.365]  (dry run)
I0916 10:28:29.470] apps.sh:83: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0916 10:28:29.573] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0916 10:28:29.674] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0916 10:28:29.786] daemonset.apps/bind rolled back
W0916 10:28:29.887] E0916 10:28:29.050900   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:29.888] E0916 10:28:29.136410   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:29.888] E0916 10:28:29.824266   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:29.933] E0916 10:28:29.933005   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:30.034] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0916 10:28:30.034] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0916 10:28:30.107] Successful
I0916 10:28:30.107] message:error: unable to find specified revision 1000000 in history
I0916 10:28:30.107] has:unable to find specified revision
I0916 10:28:30.202] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0916 10:28:30.301] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0916 10:28:30.407] daemonset.apps/bind rolled back
W0916 10:28:30.508] E0916 10:28:30.052390   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:30.509] E0916 10:28:30.138150   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:30.610] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0916 10:28:30.613] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0916 10:28:30.711] apps.sh:99: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0916 10:28:30.795] daemonset.apps "bind" deleted
I0916 10:28:30.819] +++ exit code: 0
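The rollback sequence above (apps.sh:73-99) drives DaemonSet history through ControllerRevisions: revision 1 holds the single pause:2.0 container, revision 2 adds the nginx:test-cmd container, and rolling back and forward switches between them. A rough sketch of the underlying commands, assuming the same bind DaemonSet (apps.sh wraps them in its own assertions):

  # Inspect recorded revisions, then roll back to a specific one
  kubectl rollout history daemonset/bind
  kubectl rollout undo daemonset/bind --to-revision=1
  # Asking for a revision that was never recorded yields the
  # "unable to find specified revision" error seen above
  kubectl rollout undo daemonset/bind --to-revision=1000000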
I0916 10:28:30.860] Recording: run_rc_tests
... skipping 6 lines ...
I0916 10:28:30.990] namespace/namespace-1568629710-12611 created
I0916 10:28:31.065] Context "test" modified.
I0916 10:28:31.073] +++ [0916 10:28:31] Testing kubectl(v1:replicationcontrollers)
I0916 10:28:31.171] core.sh:1046: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:28:31.341] replicationcontroller/frontend created
I0916 10:28:31.439] replicationcontroller "frontend" deleted
W0916 10:28:31.540] E0916 10:28:30.826039   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:31.540] E0916 10:28:30.934766   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:31.541] E0916 10:28:31.053872   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:31.541] E0916 10:28:31.139909   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:31.542] I0916 10:28:31.346901   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629710-12611", Name:"frontend", UID:"bc4106ae-27c0-460a-9145-1777e92ee0eb", APIVersion:"v1", ResourceVersion:"1563", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tdkdp
W0916 10:28:31.542] I0916 10:28:31.349894   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629710-12611", Name:"frontend", UID:"bc4106ae-27c0-460a-9145-1777e92ee0eb", APIVersion:"v1", ResourceVersion:"1563", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-fvdd4
W0916 10:28:31.543] I0916 10:28:31.350405   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629710-12611", Name:"frontend", UID:"bc4106ae-27c0-460a-9145-1777e92ee0eb", APIVersion:"v1", ResourceVersion:"1563", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-748vr
I0916 10:28:31.643] core.sh:1051: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:28:31.644] core.sh:1055: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0916 10:28:31.807] replicationcontroller/frontend created
W0916 10:28:31.908] I0916 10:28:31.810812   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629710-12611", Name:"frontend", UID:"3d5caa6c-390a-4104-a4f8-400deff7dd09", APIVersion:"v1", ResourceVersion:"1580", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mmknm
W0916 10:28:31.908] I0916 10:28:31.814529   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629710-12611", Name:"frontend", UID:"3d5caa6c-390a-4104-a4f8-400deff7dd09", APIVersion:"v1", ResourceVersion:"1580", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6bqpk
W0916 10:28:31.909] I0916 10:28:31.814989   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629710-12611", Name:"frontend", UID:"3d5caa6c-390a-4104-a4f8-400deff7dd09", APIVersion:"v1", ResourceVersion:"1580", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-czh49
W0916 10:28:31.909] E0916 10:28:31.827293   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:31.936] E0916 10:28:31.936250   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:32.037] core.sh:1059: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I0916 10:28:32.080] core.sh:1061: Successful describe rc frontend:
I0916 10:28:32.080] Name:         frontend
I0916 10:28:32.080] Namespace:    namespace-1568629710-12611
I0916 10:28:32.080] Selector:     app=guestbook,tier=frontend
I0916 10:28:32.081] Labels:       app=guestbook
I0916 10:28:32.081]               tier=frontend
I0916 10:28:32.081] Annotations:  <none>
I0916 10:28:32.081] Replicas:     3 current / 3 desired
I0916 10:28:32.081] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 10:28:32.081] Pod Template:
I0916 10:28:32.081]   Labels:  app=guestbook
I0916 10:28:32.082]            tier=frontend
I0916 10:28:32.082]   Containers:
I0916 10:28:32.082]    php-redis:
I0916 10:28:32.082]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0916 10:28:32.201] Namespace:    namespace-1568629710-12611
I0916 10:28:32.202] Selector:     app=guestbook,tier=frontend
I0916 10:28:32.202] Labels:       app=guestbook
I0916 10:28:32.202]               tier=frontend
I0916 10:28:32.202] Annotations:  <none>
I0916 10:28:32.203] Replicas:     3 current / 3 desired
I0916 10:28:32.203] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 10:28:32.203] Pod Template:
I0916 10:28:32.203]   Labels:  app=guestbook
I0916 10:28:32.204]            tier=frontend
I0916 10:28:32.204]   Containers:
I0916 10:28:32.204]    php-redis:
I0916 10:28:32.205]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
I0916 10:28:32.208]   Type    Reason            Age   From                    Message
I0916 10:28:32.208]   ----    ------            ----  ----                    -------
I0916 10:28:32.208]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-mmknm
I0916 10:28:32.208]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-6bqpk
I0916 10:28:32.209]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-czh49
I0916 10:28:32.209]
W0916 10:28:32.310] E0916 10:28:32.055443   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:32.310] E0916 10:28:32.141297   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:32.411] core.sh:1065: Successful describe
I0916 10:28:32.412] Name:         frontend
I0916 10:28:32.412] Namespace:    namespace-1568629710-12611
I0916 10:28:32.413] Selector:     app=guestbook,tier=frontend
I0916 10:28:32.413] Labels:       app=guestbook
I0916 10:28:32.413]               tier=frontend
I0916 10:28:32.414] Annotations:  <none>
I0916 10:28:32.414] Replicas:     3 current / 3 desired
I0916 10:28:32.414] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 10:28:32.415] Pod Template:
I0916 10:28:32.415]   Labels:  app=guestbook
I0916 10:28:32.415]            tier=frontend
I0916 10:28:32.416]   Containers:
I0916 10:28:32.416]    php-redis:
I0916 10:28:32.416]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0916 10:28:32.437] Namespace:    namespace-1568629710-12611
I0916 10:28:32.437] Selector:     app=guestbook,tier=frontend
I0916 10:28:32.438] Labels:       app=guestbook
I0916 10:28:32.438]               tier=frontend
I0916 10:28:32.438] Annotations:  <none>
I0916 10:28:32.438] Replicas:     3 current / 3 desired
I0916 10:28:32.438] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 10:28:32.438] Pod Template:
I0916 10:28:32.438]   Labels:  app=guestbook
I0916 10:28:32.438]            tier=frontend
I0916 10:28:32.438]   Containers:
I0916 10:28:32.438]    php-redis:
I0916 10:28:32.438]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0916 10:28:32.598] Namespace:    namespace-1568629710-12611
I0916 10:28:32.598] Selector:     app=guestbook,tier=frontend
I0916 10:28:32.599] Labels:       app=guestbook
I0916 10:28:32.599]               tier=frontend
I0916 10:28:32.599] Annotations:  <none>
I0916 10:28:32.599] Replicas:     3 current / 3 desired
I0916 10:28:32.600] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 10:28:32.600] Pod Template:
I0916 10:28:32.600]   Labels:  app=guestbook
I0916 10:28:32.600]            tier=frontend
I0916 10:28:32.600]   Containers:
I0916 10:28:32.600]    php-redis:
I0916 10:28:32.600]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0916 10:28:32.716] Namespace:    namespace-1568629710-12611
I0916 10:28:32.716] Selector:     app=guestbook,tier=frontend
I0916 10:28:32.716] Labels:       app=guestbook
I0916 10:28:32.717]               tier=frontend
I0916 10:28:32.717] Annotations:  <none>
I0916 10:28:32.717] Replicas:     3 current / 3 desired
I0916 10:28:32.717] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 10:28:32.717] Pod Template:
I0916 10:28:32.717]   Labels:  app=guestbook
I0916 10:28:32.717]            tier=frontend
I0916 10:28:32.717]   Containers:
I0916 10:28:32.717]    php-redis:
I0916 10:28:32.717]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0916 10:28:32.832] Namespace:    namespace-1568629710-12611
I0916 10:28:32.832] Selector:     app=guestbook,tier=frontend
I0916 10:28:32.833] Labels:       app=guestbook
I0916 10:28:32.833]               tier=frontend
I0916 10:28:32.833] Annotations:  <none>
I0916 10:28:32.833] Replicas:     3 current / 3 desired
I0916 10:28:32.833] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 10:28:32.833] Pod Template:
I0916 10:28:32.833]   Labels:  app=guestbook
I0916 10:28:32.833]            tier=frontend
I0916 10:28:32.834]   Containers:
I0916 10:28:32.834]    php-redis:
I0916 10:28:32.834]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0916 10:28:32.952] Namespace:    namespace-1568629710-12611
I0916 10:28:32.952] Selector:     app=guestbook,tier=frontend
I0916 10:28:32.952] Labels:       app=guestbook
I0916 10:28:32.952]               tier=frontend
I0916 10:28:32.952] Annotations:  <none>
I0916 10:28:32.953] Replicas:     3 current / 3 desired
I0916 10:28:32.953] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0916 10:28:32.953] Pod Template:
I0916 10:28:32.953]   Labels:  app=guestbook
I0916 10:28:32.953]            tier=frontend
I0916 10:28:32.953]   Containers:
I0916 10:28:32.953]    php-redis:
I0916 10:28:32.954]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0916 10:28:32.955]   ----    ------            ----  ----                    -------
I0916 10:28:32.955]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-mmknm
I0916 10:28:32.956]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-6bqpk
I0916 10:28:32.956]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-czh49
I0916 10:28:33.053] core.sh:1079: Successful get rc frontend {{.spec.replicas}}: 3
I0916 10:28:33.156] replicationcontroller/frontend scaled
W0916 10:28:33.257] E0916 10:28:32.828633   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:33.258] E0916 10:28:32.938032   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:33.258] E0916 10:28:33.057228   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:33.258] E0916 10:28:33.142805   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:33.259] I0916 10:28:33.163585   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629710-12611", Name:"frontend", UID:"3d5caa6c-390a-4104-a4f8-400deff7dd09", APIVersion:"v1", ResourceVersion:"1589", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-mmknm
I0916 10:28:33.359] core.sh:1083: Successful get rc frontend {{.spec.replicas}}: 2
I0916 10:28:33.359] core.sh:1087: Successful get rc frontend {{.spec.replicas}}: 2
I0916 10:28:33.549] core.sh:1091: Successful get rc frontend {{.spec.replicas}}: 2
I0916 10:28:33.649] core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
I0916 10:28:33.739] replicationcontroller/frontend scaled
I0916 10:28:33.841] core.sh:1099: Successful get rc frontend {{.spec.replicas}}: 3
I0916 10:28:33.938] core.sh:1103: Successful get rc frontend {{.spec.replicas}}: 3
I0916 10:28:34.022] replicationcontroller/frontend scaled
I0916 10:28:34.125] core.sh:1107: Successful get rc frontend {{.spec.replicas}}: 2
I0916 10:28:34.210] replicationcontroller "frontend" deleted
W0916 10:28:34.311] error: Expected replicas to be 3, was 2
W0916 10:28:34.311] I0916 10:28:33.741899   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629710-12611", Name:"frontend", UID:"3d5caa6c-390a-4104-a4f8-400deff7dd09", APIVersion:"v1", ResourceVersion:"1596", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-96tql
W0916 10:28:34.311] E0916 10:28:33.830000   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:34.312] E0916 10:28:33.939665   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:34.312] I0916 10:28:34.027435   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629710-12611", Name:"frontend", UID:"3d5caa6c-390a-4104-a4f8-400deff7dd09", APIVersion:"v1", ResourceVersion:"1601", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-96tql
W0916 10:28:34.312] E0916 10:28:34.058678   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:34.312] E0916 10:28:34.144312   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
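The frontend scaling steps above exercise kubectl scale against a replication controller, including its optional precondition: when --current-replicas is supplied and does not match the live count, the command fails with the "Expected replicas to be 3, was 2" error logged above instead of scaling. A minimal sketch, assuming the frontend RC from this namespace:

  # Unconditional scale down
  kubectl scale rc frontend --replicas=2
  # Conditional scale: only proceed if the RC currently has 3 replicas;
  # otherwise fail with "Expected replicas to be N, was M"
  kubectl scale rc frontend --current-replicas=3 --replicas=3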
W0916 10:28:34.401] I0916 10:28:34.400533   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629710-12611", Name:"redis-master", UID:"80e1e885-704a-413a-af32-29e90fca8afb", APIVersion:"v1", ResourceVersion:"1612", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-7zttk
I0916 10:28:34.502] replicationcontroller/redis-master created
I0916 10:28:34.578] replicationcontroller/redis-slave created
W0916 10:28:34.678] I0916 10:28:34.582116   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629710-12611", Name:"redis-slave", UID:"721e8fcc-0c5d-4124-babf-da976f625530", APIVersion:"v1", ResourceVersion:"1617", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-v4pwq
W0916 10:28:34.679] I0916 10:28:34.585181   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629710-12611", Name:"redis-slave", UID:"721e8fcc-0c5d-4124-babf-da976f625530", APIVersion:"v1", ResourceVersion:"1617", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-lrwdq
W0916 10:28:34.686] I0916 10:28:34.685236   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568629710-12611", Name:"redis-master", UID:"80e1e885-704a-413a-af32-29e90fca8afb", APIVersion:"v1", ResourceVersion:"1624", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-p7kxc
... skipping 4 lines ...
I0916 10:28:34.798] replicationcontroller/redis-master scaled
I0916 10:28:34.799] replicationcontroller/redis-slave scaled
I0916 10:28:34.799] core.sh:1117: Successful get rc redis-master {{.spec.replicas}}: 4
I0916 10:28:34.886] core.sh:1118: Successful get rc redis-slave {{.spec.replicas}}: 4
I0916 10:28:34.970] replicationcontroller "redis-master" deleted
I0916 10:28:34.977] replicationcontroller "redis-slave" deleted
W0916 10:28:35.078] E0916 10:28:34.831538   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:35.079] E0916 10:28:34.941286   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:35.079] E0916 10:28:35.060026   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:35.147] E0916 10:28:35.146186   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:35.171] I0916 10:28:35.170858   52896 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568629710-12611", Name:"nginx-deployment", UID:"0d70ac31-49b3-4922-b645-e2d52cabbedf", APIVersion:"apps/v1", ResourceVersion:"1658", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W0916 10:28:35.174] I0916 10:28:35.173901   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629710-12611", Name:"nginx-deployment-6986c7bc94", UID:"19c075f3-04ac-4634-b32b-0c4fe1c1ad55", APIVersion:"apps/v1", ResourceVersion:"1659", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-8zjt5
W0916 10:28:35.178] I0916 10:28:35.177873   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629710-12611", Name:"nginx-deployment-6986c7bc94", UID:"19c075f3-04ac-4634-b32b-0c4fe1c1ad55", APIVersion:"apps/v1", ResourceVersion:"1659", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-xj8lj
W0916 10:28:35.179] I0916 10:28:35.177927   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629710-12611", Name:"nginx-deployment-6986c7bc94", UID:"19c075f3-04ac-4634-b32b-0c4fe1c1ad55", APIVersion:"apps/v1", ResourceVersion:"1659", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-4tnmq
I0916 10:28:35.279] deployment.apps/nginx-deployment created
I0916 10:28:35.280] deployment.apps/nginx-deployment scaled
... skipping 4 lines ...
W0916 10:28:35.573] I0916 10:28:35.287615   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629710-12611", Name:"nginx-deployment-6986c7bc94", UID:"19c075f3-04ac-4634-b32b-0c4fe1c1ad55", APIVersion:"apps/v1", ResourceVersion:"1673", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-8zjt5
I0916 10:28:35.674] Successful
I0916 10:28:35.674] message:service/expose-test-deployment exposed
I0916 10:28:35.674] has:service/expose-test-deployment exposed
I0916 10:28:35.675] service "expose-test-deployment" deleted
I0916 10:28:35.782] Successful
I0916 10:28:35.783] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0916 10:28:35.783] See 'kubectl expose -h' for help and examples
I0916 10:28:35.783] has:invalid deployment: no selectors
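The two expose attempts above show both outcomes of kubectl expose against a Deployment: when a selector can be introspected from the workload, kubectl creates a matching Service; when it cannot, the command fails with the "couldn't retrieve selectors via --selector flag or introspection" error. A minimal sketch of both forms (the deployment name and port appear in the lines that follow; the app=nginx label is an assumed example, and the failing fixture is not shown in this excerpt):

  # Create a Service that selects the Deployment's pods on port 80
  kubectl expose deployment nginx-deployment --port=80
  # A selector can also be supplied explicitly when introspection is not possible
  kubectl expose deployment nginx-deployment --port=80 --selector=app=nginx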
W0916 10:28:35.884] E0916 10:28:35.833664   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:35.943] E0916 10:28:35.943127   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:35.967] I0916 10:28:35.966946   52896 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568629710-12611", Name:"nginx-deployment", UID:"28d7aecd-125a-42e2-a1ae-461e319b5d39", APIVersion:"apps/v1", ResourceVersion:"1698", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W0916 10:28:35.972] I0916 10:28:35.971627   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629710-12611", Name:"nginx-deployment-6986c7bc94", UID:"5f9c4e01-0f29-4e3f-9b8c-c5973a8bf30d", APIVersion:"apps/v1", ResourceVersion:"1699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-2d54g
W0916 10:28:35.975] I0916 10:28:35.974431   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629710-12611", Name:"nginx-deployment-6986c7bc94", UID:"5f9c4e01-0f29-4e3f-9b8c-c5973a8bf30d", APIVersion:"apps/v1", ResourceVersion:"1699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-52vl7
W0916 10:28:35.976] I0916 10:28:35.975244   52896 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568629710-12611", Name:"nginx-deployment-6986c7bc94", UID:"5f9c4e01-0f29-4e3f-9b8c-c5973a8bf30d", APIVersion:"apps/v1", ResourceVersion:"1699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-lsppg
W0916 10:28:36.062] E0916 10:28:36.061440   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:36.148] E0916 10:28:36.148089   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0916 10:28:36.249] deployment.apps/nginx-deployment created
I0916 10:28:36.250] core.sh:1146: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I0916 10:28:36.250] service/nginx-deployment exposed
I0916 10:28:36.271] core.sh:1150: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
I0916 10:28:36.354] (Bdeployment.apps "nginx-deployment" deleted
I0916 10:28:36.363] service "nginx-deployment" deleted
... skipping 4 lines ...
I0916 10:28:36.757] core.sh:1157: Successful get rc frontend {{.spec.replicas}}: 3
I0916 10:28:36.776] service/frontend exposed
I0916 10:28:36.879] core.sh:1161: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0916 10:28:36.974] service/frontend-2 exposed
I0916 10:28:37.079] core.sh:1165: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 443
I0916 10:28:37.249] pod/valid-pod created
W0916 10:28:37.350] E0916 10:28:36.835654   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:37.351] E0916 10:28:36.944739   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:37.351] E0916 10:28:37.063915   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0916 10:28:37.351] E0916 10:28:37.149528   52896 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.Parti